In a year or two, this will be so common, we'll hear about it all the time. College students will create their thesis (thesises? thesi?) with it. High school students will use it. Grade school kids. Lawyers, consultants, authors, "journalists," you name it.
I heard AI helped or is helping create a new antibiotic. That's the first thing I've heard about where this stuff can be useful. Hell, it might even break Big Pharma. That would be the sweetest.
But, overwhelmingly, I think this stuff will be detrimental for the human race. It will accelerate the encroaching ignorance of humans.
_________________________________________________________________________ “A man’s treatment of a dog is no indication of the man’s nature, but his treatment of a cat is. It is the crucial test. None but the humane treat a cat well.” -- Mark Twain, 1902
Posts: 9571 | Location: Northern Virginia | Registered: November 04, 2005
A good friend of mine is a College English Professor. At present there are programs that easily detect plagiarism. Hopefully there will be a program that detects ChatGPT. Besides the moral issue, there is the danger it creates for society.
Posts: 17901 | Location: Stuck at home | Registered: January 02, 2015
Regarding the creation of new drugs by AI, I do foresee an issue. We are being told that soon, this stuff will possess greater intelligence than the most intelligent humans, which means greater cunning, and therefore, less trustworthiness.
I can see no small portion of the human population being less than willing to take these drugs. I can envision conspiracy theories claiming, say, that AI-created drugs are being used to enslave mankind, or to create a class of human zombies that would set out to annihilate non-compliant humans, or that drugs beneficial at first in treating diseases and conditions would later kill those who take them.
A non-human entity which you know is smarter than you would raise great suspicion among a portion of humanity, and there would be no way to dissuade these people from their innate mechanism of self-preservation.
This goes beyond drugs and medical advice and treatment: legal counseling, career counseling, financial advice, you name it.
Originally posted by ZSMICHAEL: A good friend of mine is a College English Professor. At present there are programs that easily detect plagiarism. Hopefully there will be a program that detects ChatGPT. Besides the moral issue, there is the danger it creates for society.
My history and English composition colleagues saw a large uptick in AI-created term papers this last semester. The plagiarism checkers currently employed do a decent job of catching it, but students are getting smarter at getting around the algorithms. AI is being developed to detect other AI-created content (it might already be operational), but I doubt we'll catch every offender.
One solution is to interview each student about their paper: "Tell me more about why you wrote ______?" That really only works for a small class, though. Another option is to require handwritten in-class writing samples at the beginning of the semester and have AI analyze them for writing level. Then you have a baseline to compare future writings against, and any unusually large increase in proficiency would stand out. You could even test for statistically significant differences in writing level.
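The baseline-comparison idea above can be sketched in a few lines of Python. Everything here is a hypothetical illustration: the "writing level" proxy (average words per sentence), the sample texts, and the function names are all made up for the example, and a real tool would use far richer features than sentence length.

```python
# Hypothetical sketch: compare a student's in-class baseline writing
# against a submitted paper, using average words per sentence as a
# crude proxy for writing level, and Welch's t-test for a jump.
import re
import math
from statistics import mean, variance

def sentence_lengths(text):
    """Split text into sentences; return a word count for each one."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return [len(s.split()) for s in sentences]

def welch_t(sample_a, sample_b):
    """Welch's t-statistic for two independent samples (unequal variance)."""
    ma, mb = mean(sample_a), mean(sample_b)
    va, vb = variance(sample_a), variance(sample_b)
    se = math.sqrt(va / len(sample_a) + vb / len(sample_b))
    return (mb - ma) / se

baseline = ("Short sentences here. I write plainly. My ideas are simple. "
            "This is my style. I like it.")
paper = ("The multifaceted ramifications of industrialization, considered "
         "across socioeconomic strata, reveal a profound transformation. "
         "Furthermore, the dialectical interplay of labor and capital "
         "engendered unprecedented structural realignments throughout "
         "the period in question.")

t = welch_t(sentence_lengths(baseline), sentence_lengths(paper))
print(round(t, 2))  # a large positive t flags a jump in sentence complexity
```

A large t-statistic would only flag a paper for a follow-up conversation, not prove anything; students legitimately improve, so the interview step above would still be the deciding check.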
I'm sure glad I teach engineering and don't have to worry about it as much.
^^^^^^^^^^^^^ Thanks for your response. Graduate students, for the most part, do not try this stuff because there is too much at stake. Undergraduates are another matter. The methods you suggested are good, but putting the fear of God in them helps as well. When I was an undergrad, they simply dismissed you from the University. Of course, that is not likely at present.
Posts: 17901 | Location: Stuck at home | Registered: January 02, 2015
I have had several students turn in work that looked like ChatGPT/AI work. If I was confident (as in, the nature of the writing prompt was not really addressed), the student got a zero. In other cases the student got reduced grades because the nature of the questions made it difficult to craft answers without significant data input by the student, which might negate the entire purpose (less work) of using ChatGPT. In every case, not a single student challenged me on the poor grade, which was telling in and of itself.