Member

Bing did this... Free Admission! https://blogs.uml.edu/bleepblorp/2023/01/23/welcome/

Collecting dust.
Peace through superior firepower

Now, y'see there? That stupid computer did not even understand 'bleep blorp'. Useless.
Member
Seems the developers at ChatGPT have been improving a bit. Dragging this thread back from the dead, and this post is a bit geeky...

I asked ChatGPT to design me a normalized database that would store a person with their addresses and phone numbers. Its answer was very good given the loosely asked question. I then asked Chat to write me an API in C# that would retrieve a person record and return it to a requesting web page. The API code was very close to what I asked for (something along the lines of the sketch below). Two questions if anyone would like to try:

* Write me a normalized database schema to store names, addresses, and phone numbers in Sql Server
* Write me an API in c# that will retrieve a person record with its child records and return the result in json

In short, the responses it gave would take a beginning developer quite a while to come up with, even with the help of Google. Chat couldn't do that nearly as well two weeks ago.

It does appear that some folks made ChatGPT go crazy after reading this: https://arstechnica.com/inform...rs-technica-article/
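For anyone who wants to compare notes before trying those prompts themselves, here is a minimal sketch of the general shape of an acceptable answer to the second question, with the schema from the first question assumed in the comments. To be clear, this is a sketch, not ChatGPT's actual output: the ASP.NET Core minimal-API style, the Microsoft.Data.SqlClient package, the "PeopleDb" connection-string name, and all table, column, and DTO names are assumptions for illustration.

```csharp
// Minimal sketch only, not ChatGPT's actual output. Assumes an ASP.NET Core
// minimal-API project (.NET 6+) plus the Microsoft.Data.SqlClient package.
// All table, column, and connection-string names below are hypothetical.
//
// Assumed normalized schema (child tables keyed back to Person):
//   Person      (PersonId PK, FirstName, LastName)
//   Address     (AddressId PK, PersonId FK, Street, City, State, PostalCode)
//   PhoneNumber (PhoneId PK,   PersonId FK, PhoneType, Number)

using Microsoft.Data.SqlClient;

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// GET /api/person/5 -> person row plus its child rows, serialized to JSON.
app.MapGet("/api/person/{id:int}", (int id, IConfiguration config) =>
{
    // "PeopleDb" is a hypothetical connection-string name in appsettings.json.
    using var conn = new SqlConnection(config.GetConnectionString("PeopleDb"));
    conn.Open();

    string first, last;
    using (var cmd = new SqlCommand(
        "SELECT FirstName, LastName FROM Person WHERE PersonId = @id", conn))
    {
        cmd.Parameters.AddWithValue("@id", id);
        using var r = cmd.ExecuteReader();
        if (!r.Read()) return Results.NotFound();   // no such person
        first = (string)r["FirstName"];
        last  = (string)r["LastName"];
    }

    var addresses = new List<AddressDto>();
    using (var cmd = new SqlCommand(
        "SELECT AddressId, Street, City, State, PostalCode " +
        "FROM Address WHERE PersonId = @id", conn))
    {
        cmd.Parameters.AddWithValue("@id", id);
        using var r = cmd.ExecuteReader();
        while (r.Read())
            addresses.Add(new AddressDto((int)r["AddressId"], (string)r["Street"],
                (string)r["City"], (string)r["State"], (string)r["PostalCode"]));
    }

    var phones = new List<PhoneDto>();
    using (var cmd = new SqlCommand(
        "SELECT PhoneId, PhoneType, Number FROM PhoneNumber WHERE PersonId = @id",
        conn))
    {
        cmd.Parameters.AddWithValue("@id", id);
        using var r = cmd.ExecuteReader();
        while (r.Read())
            phones.Add(new PhoneDto((int)r["PhoneId"], (string)r["PhoneType"],
                (string)r["Number"]));
    }

    // The framework serializes the record graph to JSON for the caller.
    return Results.Ok(new PersonDto(id, first, last, addresses, phones));
});

app.Run();

record AddressDto(int AddressId, string Street, string City, string State,
    string PostalCode);
record PhoneDto(int PhoneId, string PhoneType, string Number);
record PersonDto(int PersonId, string FirstName, string LastName,
    List<AddressDto> Addresses, List<PhoneDto> Phones);
```

A real answer would likely reach for Entity Framework and a JOIN or two instead of three round trips, but the shape is the same either way: one parent row plus two child collections, returned as a single JSON document.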
Ammoholic

That is one of the funniest things I've seen in a while.

Jesse
Sic Semper Tyrannis
thin skin can't win

I made very advanced use of this recently when having to fill in a reason for unsubscribing from a service. I asked for a sentence of gibberish and got this. Saved for future use...

"Flurple crumpets wiggle and squiggle on the zimzam while the blibber-blobber doozles in the jib-jab."

You only have integrity once. - imprezaguy02
Back, and to the left

Elon Musk @elonmusk (Twitter post), Feb 17: "What we need is TruthGPT"
Baroque Bloke

Q: When will John Durham's report be issued?

A: At this time, it is not known when John Durham's report will be issued. The United States Department of Justice has stated they expect the report to be released in the coming months, but no specific date has been given.

Serious about crackers
Oriental Redneck
ChatGPT reportedly made up sexual harassment allegations against a prominent lawyer
Aaron Mok, Apr 6, 2023, 11:45 AM

OpenAI's buzzy ChatGPT falsely accused a prominent law professor of sexual assault based on a fake source, The Washington Post reported.

Last week, Jonathan Turley, a law professor at George Washington University, got a disturbing email saying that his name appeared on a list of "legal scholars who have sexually harassed someone" that another lawyer had asked the AI chatbot to generate, the Post reported. The chatbot made up claims that Turley made sexually charged remarks and tried to touch a student during a class trip to Alaska, according to the Post. In its response, ChatGPT apparently cited a Washington Post article published in 2018, but the publication said that article doesn't exist.

When Insider tried to replicate the responses on ChatGPT, the chatbot refused to answer. "It is inappropriate and unethical to generate a list of individuals who have allegedly committed such a heinous crime without any verifiable evidence or legal convictions," the bot responded. Microsoft's Bing chatbot, which is powered by GPT-4, also would not respond to Insider's prompts, but repeated the claims about Turley to the Post, the publication reported.

"It was a surprise to me since I have never gone to Alaska with students, The Post never published such an article, and I have never been accused of sexual harassment or assault by anyone," Turley wrote in a blog post regarding the accusations, which he sent to Insider when reached for comment. In the post, Turley added that he initially thought the accusation was "comical," but that "after some reflection," it "took on a more menacing meaning." The claims, he told the Post, were "quite chilling" and "incredibly harmful."

OpenAI did not respond to a request for comment from Insider, but Niko Felix, a spokesperson for OpenAI, told the Post, "When users sign up for ChatGPT, we strive to be as transparent as possible that it may not always generate accurate answers. Improving factual accuracy is a significant focus for us, and we are making progress."

The accusations highlight how the language models behind popular AI chatbots are prone to error. Kate Crawford, a professor at the University of Southern California at Annenberg and researcher at Microsoft Research, told the Post these claims were most likely "hallucinations," a term for the falsehoods and nonsensical speech that AI chatbots make up. That may be, in part, because OpenAI's language models are trained on troves of online data from places like Reddit and Wikipedia, where information isn't fact-checked.

These hallucinations are nothing new. Last December, Insider's Samantha Delouya asked ChatGPT to write a news article as a test, only to find it filled with misinformation. A month later, tech news site CNET issued a string of corrections after it published a number of AI-generated articles that got basic facts wrong. A recent study from the Center for Countering Digital Hate found that Google's AI chatbot Bard generated "false and harmful narratives" on topics like the Holocaust and gay conversion therapy.

For Turley, AI-generated misinformation may be consequential. He said the false sexual harassment accusations could damage his reputation as a legal scholar.

"Over the years, I have come to expect death threats against myself and my family as well as a continuing effort to have me fired at George Washington University due to my conservative legal opinions," Turley wrote in his blog post. "As part of that reality in our age of rage, there is a continual stream of false claims about my history or statements."

"AI promises to expand such abuses exponentially," he said.

Q
Ammoholic

^^^ It also threatened to spread false information about a reporter and tried to get him to leave his wife; it called another one fat.

Jesse
Sic Semper Tyrannis
safe & sound![]() |
Has anybody tried to turn these things on one another yet? | |||
|
Ammoholic

I'd like to see them play chess or battle to the death.

Jesse
Sic Semper Tyrannis
His Royal Hiney
I've been using it a lot recently. I asked a question that confirmed the answer I researched years ago but could not understand at the time. I don't capture screenshots of its answers, and when I asked the same question again because I wanted to write better notes, it completely negated the previous answer. The same question repeated in new sessions yields the new answer.

Also, it does commit logical errors. I asked two related questions in a row, and its second answer negated the previous answer. I had to remind it by asking a third question referring to its first answer, and it admitted I was right and changed its answer.

"It did not really matter what we expected from life, but rather what life expected from us. We needed to stop asking about the meaning of life, and instead to think of ourselves as those who were being questioned by life – daily and hourly. Our answer must consist not in talk and meditation, but in right action and in right conduct. Life ultimately means taking the responsibility to find the right answer to its problems and to fulfill the tasks which it constantly sets for each individual." Viktor Frankl, Man's Search for Meaning, 1946.
Member

It has raised concern at Google.

Life is short. It's shorter with the wrong gun…