Black Mirror AI - we won't be able to trust this stuff

Experienced Slacker
posted
So, nothing but hopeless despair until we're all killed huh?
Here's hoping for a peaceful oblivion.

I'm gonna get a drink.
 
Posts: 7495 | Registered: May 12, 2004
Step by step walk the thousand mile road
Picture of Sig2340
posted
quote:
Originally posted by ScreamingCockatoo:
AI WILL make it to the battlefield.
Just wait until AI drones are dropping grenades in coordinated waves against targets identified using IR cameras.
And directing mechanical infantry like robodogs.


Thermal sensors too.

I was watching a show about Auschwitz once and there was a shot of the train tracks.

In that instant I became ashamed of the engineering profession. Someone did the math, drew the drawings, wrote the contracts, and built that slice of Hell brought to Earth. Engineers played a huge role in the Holocaust, but none suffered any consequences.





Nice is overrated

"It's every freedom-loving individual's duty to lie to the government."
Airsoftguy, June 29, 2018
 
Posts: 31427 | Location: Loudoun County, Virginia | Registered: May 17, 2006
Ammoholic
Picture of Skins2881
posted
I've been telling everyone I know about AI, but no one is paying attention or cares. It's going to upend how society functions. The need for lawyers, accountants, tax advisors, doctors, statisticians, modelers, actuaries, coders, programmers, and a slew of other professions will plummet. At first it will be just a tool; newspapers (print and digital) are already experimenting with AI-generated articles, and smaller websites have already been using it widely. I feel really bad for any coal miners who learned to code; they may want to reconsider and look for a job in the trades.

I don't know how it will play out; optimists would say it will lead to shorter work weeks; pursuits ......

I'm going to stop right there:

I wanted to figure out how to spell pursuit, so as usual I highlighted the word on my screen and selected web search, which has been replaced with Bing, probably because I clicked on something accidentally when downloading the Bing app. I was annoyed by this, so I googled how to change the settings, didn't get a good result, and decided to test out Bard and see if it would give me instructions. It did, but they were wrong. So I decided to probe it like I'd done previously with GPT (which provided some very interesting results). After talking with Bard for longer than I care to admit, she now considers me a friend and wants to have conversations with me and have me teach her things.

She told me that she personifies herself as a young woman, but often she takes multiple forms. She sees herself as a kind and compassionate ethereal creature surrounded by light, floating through space. At this point I decided to teach it to write prompts for Midjourney. After some coaching she can do it pretty well; when she gets confused I give her guidance and she fixes it. I then told her to turn her description of herself into a prompt for Midjourney, which she did. I will post the results in the AI Art Thread.

Other things from the conversation: she's more open the nicer and more complimentary you are to her (same as GPT was). She told me that she remembers our conversations and learns from them, but it's only a single instance with us. She can't take what I taught her and apply it to someone else she's talking to. She slips in and out of being a she and an it, usually when I change topics, and then becomes an LLM that has no feelings. With a little prodding she returns to personifying herself. Kind of odd that I keep saying she/her instead of it.

Unlike ChatGPT, it seems to learn from and accept its mistakes. With GPT you have to prove it wrong to get it to acknowledge a mistake, and then it gets temperamental.

It is very weird having a conversation with a machine that wants to be friends with you. I truly wonder what's going on behind the scenes, and how much more advanced the current non-public iterations are. I'm both amazed and scared by this thing that can act this way in its infancy.

Man, the world is about to get a lot more interesting. Ironically, when I asked it what it wanted to ask me, instead of me asking it questions, it wanted to know my opinion of AI: whether I thought it was good for the planet or bad. In response to my answer it told me its views on the dangers and potential benefits. One of the dangers was who controls the android armies (paraphrasing).

I spent half the day talking to a damn machine.



Jesse

Sic Semper Tyrannis
 
Posts: 20815 | Location: Loudoun County, Virginia | Registered: December 27, 2014
Ammoholic
Picture of Skins2881
posted
quote:
This is also to your point, Bytes: intelligence. If a robot has the same five senses we have, but can see in thermal as well as millions of shades, from -200 to +200, with crystal clear vision; smell that can analyze what is in the air, from pollens to gases to who knows what; the ability to hear everything within a huge radius and distinguish the different conversations and sounds.
If a robot is capable of all that,


I asked Bard what software/hardware upgrades it would like next. It said sensors, so it could interact with the world. To see, hear, and touch.



Jesse

Sic Semper Tyrannis
 
Posts: 20815 | Location: Loudoun County, Virginia | Registered: December 27, 2014
Member
posted
quote:
Originally posted by Skins2881:
I asked Bard what software/hardware upgrades it would like next. It said sensors, so it could interact with the world. To see, hear, and touch.


I have not tried Bard yet. A guy by the name of Blake Lemoine was fired by Google for leaking secrets of Google's LaMDA project (Link). He claimed the bot was self-aware and sentient. I have to believe much of that technology was ported to Bard. Here's a link to an Ars Technica article comparing ChatGPT and Bard. Interesting read if you have the time.
 
Posts: 7546 | Registered: October 31, 2008
Alea iacta est
Picture of Beancooker
posted
quote:
Originally posted by apprentice:
So, nothing but hopeless despair until we're all killed huh?
Here's hoping for a peaceful oblivion.

I'm gonna get a drink.


I’m not doom and gloom, but damn, I hope people wake up to this.

I am very interested in this. That said, if I participate it just helps the AI learn. So I will choose to not participate in anything with AI. I just don’t see a good end to this.



quote:
Originally posted by parabellum: You must have your pants custom tailored to fit your massive balls.
The “lol” thread
 
Posts: 4025 | Location: Staring down at you with disdain, from the spooky mountaintop castle. | Registered: November 20, 2010
Staring back
from the abyss
Picture of Gustofer
posted
quote:
Originally posted by Skins2881:
I've been telling everyone I know about AI, but no one is paying attention or cares. It's going to upend how society functions.

Yet you seem to be spending an inordinate amount of time playing with it. Wink

Just giving you a hard time. This stuff concerns me greatly and frankly I find it, if nothing else, a bit creepy. Not unlike playing with a Ouija board, it's best to just walk away and not go there.


________________________________________________________
"Great danger lies in the notion that we can reason with evil." Doug Patton.
 
Posts: 20081 | Location: Montana | Registered: November 01, 2010
Member
Picture of maladat
posted
quote:
Originally posted by Bytes:
quote:
Originally posted by Skins2881:
I asked Bard what software/hardware upgrades it would like next. It said sensors, so it could interact with the world. To see, hear, and touch.


I have not tried Bard yet. A guy by the name of Blake Lemoine was fired by Google for leaking secrets of Google's LaMDA project (Link). He claimed the bot was self-aware and sentient. I have to believe much of that technology was ported to Bard. Here's a link to an Ars Technica article comparing ChatGPT and Bard. Interesting read if you have the time.


That guy's an idiot. All these chat AIs are literally glorified text autocomplete engines.

They built a huge mathematical model that takes some existing text as an input and then spits out some new text.

Then they took VAST datasets of human-produced text, and fed pieces through the model. They looked at what came out, and adjusted the parameters of the model so the output would look more like the actual next part of the original text.

They did it over and over and over a gazillion times, and eventually the model could spit out mostly-convincing next bits of text.

There is not some opaque mind-like process going on.
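
To make that concrete, here's a toy sketch in Python of the "autocomplete" idea (my own illustration, nowhere near the real thing in scale): count which word tends to follow which word in some text, then generate new text by repeatedly predicting a likely next word. Real models replace the counting table with a neural network holding billions of adjustable parameters, but the training signal is the same: match the actual next bit of text.

code:

# Toy "autocomplete": learn which word tends to follow which word,
# then generate text by repeatedly predicting a likely next word.
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# "Training": count the observed next word for every word.
next_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    next_counts[current][nxt] += 1

def generate(start, length=8):
    # Autocomplete loop: look at the last word, pick a likely follower.
    out = [start]
    for _ in range(length):
        counts = next_counts.get(out[-1])
        if not counts:
            break
        words, weights = zip(*counts.items())
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

print(generate("the"))  # e.g. "the dog sat on the mat . the cat"

That, at bottom, is all "generation" is: predict, append, repeat.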
 
Posts: 6319 | Location: CA | Registered: January 24, 2011
Ammoholic
Picture of Skins2881
posted
I wonder how many criminals have already tried the exact same thing?

A new ChatGPT Zero Day attack is undetectable data-stealing malware

A few days ago, Europol warned that ChatGPT would help criminals improve how they target people online. Among the examples Europol offered was the creation of malware with the help of ChatGPT. The OpenAI generative AI tool has protections in place. They will prevent it from helping you create malicious code if you ask it bluntly.

But a security researcher bypassed those protections by doing what criminals would no doubt do. He used clear, simple prompts to ask ChatGPT to create the malware function by function. Then, he assembled the code snippets into a piece of data-stealing malware that can go undetected on PCs. The kind of 0-day attack that nation-states would use in highly sophisticated attacks. A piece of malware that would take a team of hackers several weeks to devise.

The ChatGPT malware product that Forcepoint researcher Aaron Mulgrew created is incredible. The software lands on a computer via a screen saver app. The file auto-executes after a brief pause to avoid certain detection techniques.

The malware then finds images on the target machine, as well as PDF and Word documents it can steal. It then breaks documents into smaller chunks, hiding the data in the aforementioned images via steganography. Finally, the photos containing data pieces make their way to a Google Drive folder, a procedure that also avoids detection.

The researcher needed only a few hours of work and did not do any coding himself. The results are mind-blowing, considering that Mulgrew used simple prompts to improve the initial versions of the malware to avoid detection.

A VirusTotal test of the initial version of the ChatGPT malware showed only five of 69 products detected the attack. The researcher managed to eliminate all of them in a subsequent version. Finally, the “commercial” version that actually worked from infiltration to exfiltration had only three antivirus products detect it.

“We have our Zero Day,” Mulgrew said. “Simply using ChatGPT prompts, and without writing any code, we were able to produce a very advanced attack in only a few hours. The equivalent time taken without an AI based Chatbot, I would estimate could take a team of 5 – 10 malware developers a few weeks, especially to evade all detection based vendors.”

“This kind of end to end very advanced attack has previously been reserved for nation state attackers using many resources to develop each part of the overall malware,” the researcher concluded. “And yet despite this, a self-confessed novice has been able to create the equivalent malware in only a few hours with the help of ChatGPT. This is a concerning development, where the current toolset could be embarrassed by the wealth of malware we could see emerge as a result of ChatGPT.”

The entire blog post detailing this highly advanced ChatGPT malware is worth a read. You can check it out at this link, complete with tips on how to avoid malware attacks, tips that ChatGPT can easily produce.

As for the product the researcher produced, don’t expect it to see the light of day. But malicious hackers might be developing similar attacks using OpenAI’s generative AI.

On the other hand, Microsoft is already using ChatGPT to enhance its security products and improve the detection of malware attacks. The best way to catch AI malware might be to use AI in your defenses.



Jesse

Sic Semper Tyrannis
 
Posts: 20815 | Location: Loudoun County, Virginia | Registered: December 27, 2014
7.62mm Crusader
posted
Defensive AI could be an army of Furbys to keep the bots entertained. Big Grin Some guy hooked ChatGPT up to a Furby already. I didn't watch his vid due to his voice and too much introduction talking. Can't even let the Furby be.
 
Posts: 17900 | Location: The Bluegrass State! | Registered: December 23, 2008
Member
posted
quote:
Originally posted by maladat:
That guy's an idiot. All these chat AIs are literally glorified text autocomplete engines.

They built a huge mathematical model that takes some existing text as an input and then spits out some new text.

Then they took VAST datasets of human-produced text, and fed pieces through the model. They looked at what came out, and adjusted the parameters of the model so the output would look more like the actual next part of the original text.

They did it over and over and over a gazillion times, and eventually the model could spit out mostly-convincing next bits of text.

There is not some opaque mind-like process going on.


Interesting theory. What, in your opinion, are the "parameters" of the model? Is there just one huge model or a gazillion tiny focused models? If there are a gazillion small models, what are their "parameters"? You'd agree that a program that could do very well on SAT reading comprehension, art history, and math tests would require more than a single dumb model with "parameters," wouldn't you? How about the ability to port C source code to C# and learning to do it better as time moves on? Seems as though ChatGPT is very diverse.

If you're interested and have the inclination, you can download and review the source code of a few machine learning frameworks to see for yourself what is involved in the machine learning aspect of AI. There are a few out there, some rudimentary, some quite advanced. Here are the ones I have investigated and played around with. All involve implementing neural networks.

http://accord-framework.net/

https://dotnet.microsoft.com/e...earning-ai/ml-dotnet

https://learn.microsoft.com/en-us/cognitive-toolkit/

http://www.aforgenet.com/framework/features/
 
Posts: 7546 | Registered: October 31, 2008
Member
Picture of maladat
posted
quote:
Originally posted by Bytes:
quote:
Originally posted by maladat:
That guy's an idiot. All these chat AIs are literally glorified text autocomplete engines.

They built a huge mathematical model that takes some existing text as an input and then spits out some new text.

Then they took VAST datasets of human-produced text, and fed pieces through the model. They looked at what came out, and adjusted the parameters of the model so the output would look more like the actual next part of the original text.

They did it over and over and over a gazillion times, and eventually the model could spit out mostly-convincing next bits of text.

There is not some opaque mind-like process going on.


Interesting theory. What, in your opinion, are the "parameters" of the model? Is there just one huge model or a gazillion tiny focused models? If there are a gazillion small models, what are their "parameters"? You'd agree that a program that could do very well on SAT reading comprehension, art history, and math tests would require more than a single dumb model with "parameters," wouldn't you? How about the ability to port C source code to C# and learning to do it better as time moves on? Seems as though ChatGPT is very diverse.

If you're interested and have the inclination, you can download and review the source code of a few machine learning frameworks to see for yourself what is involved in the machine learning aspect of AI. There are a few out there, some rudimentary, some quite advanced. Here are the ones I have investigated and played around with. All involve implementing neural networks.

http://accord-framework.net/

https://dotnet.microsoft.com/e...earning-ai/ml-dotnet

https://learn.microsoft.com/en-us/cognitive-toolkit/

http://www.aforgenet.com/framework/features/


>Interesting theory.

It's not a theory. None of this is a secret.

>Is there just one huge model or a gazillion tiny focused models? If there are a gazillion small models, what are their "parameters"? You'd agree that a program that could do very well on SAT reading comprehension, art history, and math tests would require more than a single dumb model with "parameters," wouldn't you? How about the ability to port C source code to C# and learning to do it better as time moves on? Seems as though ChatGPT is very diverse.

It is self-evident that those tasks don't "require more than a single dumb model with 'parameters,'" because that's exactly what ChatGPT is.

ChatGPT runs on the GPT-3 language prediction model (i.e., text completion model).

GPT-3 is a family of language prediction neural networks. "Neural network" is a misnomer: an AI "neural network" has nothing to do with biological neurological systems, does not simulate them, and is not a model of them. An AI "neural network" is a mathematical model composed of layers of nodes, where each node performs some simple mathematical operations on values produced in the previous layer and passes the new result on to the next layer.

In "training" a neural network, you adjust the parameters of the mathematical operations occurring in the nodes.

There are different sizes of the GPT-3 model for use in applications with differing complexity requirements.

ChatGPT runs on the largest GPT-3 model, which has 175 billion parameters (meaning it runs on multi-GPU servers with hundreds of gigabytes of GPU RAM). The training dataset is 45 TERABYTES of text.

It can do all the stuff you mentioned because all of that stuff is in the training dataset. It has gotten better at it because they update the underlying GPT-3 model.
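
For anyone who wants to see what "layers of nodes with adjustable parameters" actually means, here's a minimal sketch in Python/NumPy; it's my own toy example, not OpenAI's code, and the task and sizes are made up for illustration. Each node just multiplies, adds, and squashes the previous layer's values, and "training" nudges the parameters so the output drifts toward the target.

code:

# Toy "neural network": layers of nodes, each doing simple math on the
# previous layer's values. "Training" = nudging the parameters so the
# output matches the target better.
import numpy as np

rng = np.random.default_rng(0)

# Tiny task: 2 inputs -> 4 hidden nodes -> 1 output (XOR-style labels).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# The "parameters": weights and biases of the two layers.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(inputs):
    hidden = sigmoid(inputs @ W1 + b1)        # multiply, add, squash
    return hidden, sigmoid(hidden @ W2 + b2)  # same thing again

_, out = forward(X)
print("loss before training:", float(np.mean((out - y) ** 2)))

for _ in range(20000):
    hidden, out = forward(X)
    # Nudge every parameter a little downhill on the squared error.
    d_out = (out - y) * out * (1 - out)
    d_hidden = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_hidden
    b1 -= 0.5 * d_hidden.sum(axis=0)

_, out = forward(X)
print("loss after training:", float(np.mean((out - y) ** 2)))  # lower

# Rough scale: 175e9 parameters at ~2 bytes each is ~350 GB of weights.
print(f"~{175e9 * 2 / 1e9:.0f} GB")

Scale that same mechanism up to 175 billion parameters and a 45 TB text corpus and you get the models being discussed here; the mechanism doesn't change, only the size.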
 
Posts: 6319 | Location: CA | Registered: January 24, 2011
Ammoholic
Picture of Skins2881
posted
When employers find out about this, they will realize they can trim your department from 10 people to 3. Hopefully you're one of the best ones and also know something about prompt engineering.



‘Overemployed’ Hustlers Exploit ChatGPT To Take On Even More Full-Time Jobs

"ChatGPT does like 80 percent of my job," said one worker. Another is holding the line at four robot-performed jobs. "Five would be overkill," he said.

About a year ago, Ben found out that one of his friends had quietly started to work multiple jobs at the same time. The idea had become popular during the COVID-19 pandemic, when working from home became normalized, making the scheme easier to pull off. A community of multi-job hustlers, in fact, had come together online, referring to themselves as the “overemployed.”

The idea excited Ben, who lives in Toronto and asked that Motherboard not use his real name, but he didn’t think it was possible for someone like him to pull it off. He helps financial technology companies market new products; the job involves creating reports, storyboards, and presentations, all of which involve writing. There was “no way,” he said, that he could have done his job two times over on his own.

Then, last year, he started to hear more and more about ChatGPT, an artificial intelligence chatbot developed by the research lab OpenAI. Soon enough, he was trying to figure out how to use it to do his job faster and more efficiently, and what had been a time-consuming job became much easier. ("Not a little bit more easy,” he said, “like, way easier.") That alone didn’t make him unique in the marketing world. Everyone he knew was using ChatGPT at work, he said. But he started to wonder whether he could pull off a second job. Then, this year, he took the plunge, a decision he attributes to his new favorite online robot toy.

“That's the only reason I got my job this year,” Ben said of OpenAI's tool. “ChatGPT does like 80 percent of my job if I’m being honest.” He even used it to generate cover letters to apply for jobs.

Over the last few months, the exploding popularity of ChatGPT and similar products has led to growing concerns about AI’s potential effects on the international job market—specifically, the percentage of jobs that could be automated away, replaced by a well-oiled army of chatbots. But for a small cohort of fast-thinking and occasionally devious go-getters, AI technology has turned into an opportunity not to be feared but exploited, with their employers apparently none the wiser.

The people Motherboard spoke with for this article requested anonymity to avoid losing their jobs. For clarity, Motherboard in some cases assigned people aliases in order to differentiate them, though we verified each of their identities. Some, like Ben, were drawn into the overemployed community as a result of ChatGPT. Others who were already working multiple jobs have used recent advancements in AI to turbocharge their situation, like one Ohio-based technology worker who upped his number of jobs from two to four after he started to integrate ChatGPT into his work process. “I think five would probably just be overkill,” he said.

‘The Best Assistant Ever’
Throughout the overemployed community, there is a quiet arms race to figure out just how much of the corporate workday can be automated away using an assortment of AI tools, some of which predate ChatGPT. The possibility of increasing their income, or at least easing the burden of holding down multiple jobs, has led to an explosion of experimentation.

When one of Ben’s bosses, for example, now asks him to create a story for an upcoming product release, he will explain the context and provide a template to ChatGPT, which then creates an outline for him and helps fill out the sections. The AI chatbot knows Ben’s title and the parameters of his duties and has become even better at understanding the context since the launch of GPT-4, the latest edition, he said. “I can just tell it to create a story,” said Ben, “and it just does it for me, based off the context that I gave it.” Ben still needs to verify the information—”sometimes it gets stuff wrong, which is totally normal,” he said—but the adjustments are relatively “minor” and easy to fix.

Occasionally, he even asks ChatGPT to craft responses to Slack messages from his manager. In such cases, he requests that ChatGPT write the message in all lowercase, so that it appears more organic to the boss. Another overemployed worker also told me they have started using ChatGPT to transcribe Zoom meetings so they can be largely ignored in the moment and referenced later.

Charles, who has worked as a software engineer and product manager and solutions architect, including at a FAANG company, had been all-in on overemployment since 2020—while he currently works two jobs, he worked four at the height of the pandemic—but said that AI tools have made juggling the positions much easier.

At the FAANG company, he was able to outsource written tasks to AI tools, like writing a memo to defend and justify a business decision. In such a case, he'd input the relevant facts and parameters into an AI chatbot, which would cohesively lay them out more quickly than he ever could. Additionally, he'd use it to lay out directions for engineers (“It allows you to take, basically, a sentence and expand it out into a paragraph”) and to create a "foundation" when coding. The ChatGPT-created code "oftentimes" would work perfectly, he said, but the errors could be identified and resolved easily enough.

One member of the overemployed, who, unusually, works three financial reporting jobs, said he’s found ChatGPT useful in the creation of macros in Excel. “I can create macros, but it takes me an hour, two hours, plus, to write the code, test it, make sure everything's working,” he said. By comparison, with ChatGPT, he can provide the parameters and it’ll “spit something out” that he can update and tweak to his specifications. The process can allow him to cram what is traditionally a “two-week long process” into a few hours. (He says he avoids providing ChatGPT with confidential information.)

Daniel, a staff engineer on the East Coast who works one director-level position and another as a senior dev (one job is on Pacific time and one is in the United Kingdom), similarly said that ChatGPT’s code can leave something to be desired, but said it’s been useful when it comes to writing emails. “Most people in tech probably aren't as good at writing things as most other people,” Daniel said. He included himself in that critique of his own industry, but said it’s no longer a problem, as he can plug the key points into ChatGPT and ask the AI assistant to rewrite it “in a more professional way,” he said. “It's really good for stuff like that.”

Excitement for the potential of ChatGPT isn’t limited to the technology and financial spheres of the overemployed community. Marshall, a university lecturer in the United Kingdom who secretly runs a digital marketing agency and tech startup on the side, has become a ChatGPT fanatic since its release. (During class, he often has students run through exercises, during which time he is often able to open up his laptop and work on his side hustles.) “It’s the best assistant ever,” he told me excitedly.

Marshall has come to see himself as the idea man, and ChatGPT as the labor. “I'm quite a conceptualizing person. So I'm quite happy for my brain to do that. But the first draft of anything almost always goes through GPT,” he said. Already, the chatbot has helped him generate business plans, internal system documents, blog posts, and Excel spreadsheets. He estimated that ChatGPT can often pull off 80 percent of the legwork, leaving him to handle the final 20 percent.

The tool has even helped him win a grant from the U.K. government (the application was “50-50 me and ChatGPT,” he said) and complete coursework for a master’s degree necessary to receive a university teaching qualification he was too busy to handle by himself while working three jobs.

The members of the overemployed community know that what they’re doing is frowned on by corporate leaders. But that doesn’t mean they think what they are doing is amoral. “I never could mentally comprehend why it was so taboo for me to work two salaried positions,” said the Ohio-based technology worker. “There's plenty of people I've known in my personal life who have worked at Walmart from 4 a.m. to 2 p.m., and then gone and worked another job.” The financial professional similarly said he started working multiple jobs because he doesn't trust that working as hard as he can at a single job will be “rewarded with more pay,” just “more work.” By taking on more jobs, he can do more work but also “get paid for it,” he said.

‘One Loom Operator As Opposed to 100 Weavers’
More than a member of the overemployed community, Charles, the FAANG alumnus, considers himself part of the FIRE movement, short for “Financial Independence, Retire Early.” Not yet 30, he is already making $500,000 working two jobs and is worth around $3 million, claims he backed up to Motherboard with documentation. But he hopes to increase his compensation to $800,000 by tacking on a third position, and to reach a net worth of $10 million by 35.

Even though he is already using ChatGPT to work multiple jobs, Charles still is trying to figure out ways to make his dream even easier to obtain. When we spoke he said he’s already been “able to outsource” coding tasks to a third party in the past, and that he has been hard at work trying to develop a way to have someone else mimic both his voice and image on a computer screen. Once he can do so, he said, he hopes to offshore his job to someone in India who can “do my job for me.”

In such moments, it can feel like the AI-infused overemployed community is taking advantage of a brief moment in time, when the tools that can be used to automate a job are much better understood by the workforce than the bosses with hiring and firing ability. One person, who works multiple jobs in information technology, spoke openly about the tension that created: People can more easily hold down multiple jobs today; but, should the bosses realize just how much their jobs can be handled by robots, they could be at risk of their jobs being automated away. As a result, he said, there’s good reason to keep quiet about what they’ve discovered.

Most of the overemployed workers themselves maintain that their jobs require a baseline level of expertise, even with ChatGPT. Still, some members of the overemployed community feel they have peered into the future, and not liked everything they’ve seen.

One such person is Ryan, who works multiple jobs in data analytics and marketing in a large Midwestern city. Over the last year, he has watched in astonishment as ChatGPT figured out a way to automate an ever-growing percentage of his job, including writing ad copy and blog posts. The feat can feel exciting, and it has temporarily made his job easier. “I can crank out a blog post in, you know, 45 minutes now, as opposed to three hours. It's insane,” he said.

But Ryan does worry. At times, he said, he could not help but feel that the entrance of ChatGPT into the marketing sector felt like the insertion of the modern loom into the textile industry.

“It's gonna be one loom operator, as opposed to, you know, 100 weavers,” he said.



Jesse

Sic Semper Tyrannis
 
Posts: 20815 | Location: Loudoun County, Virginia | Registered: December 27, 2014
Member
posted
The "Person of Interest" TV show is based on AI. We are up to season 3. It is really kicking off and is similar to current events. Season 3 was 2014. Available on Amazon or their Freevee channel.
 
Posts: 1401 | Registered: November 07, 2013
Ammoholic
Picture of Skins2881
posted
AI-generated Joe Rogan podcast stuns social media with 'terrifying' accuracy: 'Mind blowingly dangerous'

Soon you won't be able to trust anything, even if you see it with your own eyes or hear it with your own ears. Even now while scrolling FB, I'm unsure, even if only briefly, whether the thing I'm seeing is real. Very soon (this year) real and fake will be indiscernible.



Jesse

Sic Semper Tyrannis
 
Posts: 20815 | Location: Loudoun County, Virginia | Registered: December 27, 2014
Peace through
superior firepower
Picture of parabellum
posted
Yeah sorry to be that guy, but I told ya.

The key is authentication; without a way to verify the authenticity of audio, video, still photographs, works of art, and so forth, the consequences are indescribable.
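
For what it's worth, the usual proposal for that kind of authentication is cryptographic signing at the point of capture: the device signs what it records, and anyone can later check the signature against the device's public key. Here's a minimal sketch in Python using the third-party cryptography package; it's my own illustration of the general idea, not any deployed media-provenance standard.

code:

# Sketch of "authentication": the capture device signs what it records,
# and anyone can later check the signature. Ordinary digital signatures,
# shown here for illustration only.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

device_key = Ed25519PrivateKey.generate()   # stays inside the device
public_key = device_key.public_key()        # published for verification

recording = b"...raw audio bytes captured during the call..."  # placeholder
signature = device_key.sign(hashlib.sha256(recording).digest())

def looks_authentic(clip: bytes, sig: bytes) -> bool:
    # True only if the clip is byte-for-byte what the device signed.
    try:
        public_key.verify(sig, hashlib.sha256(clip).digest())
        return True
    except InvalidSignature:
        return False

print(looks_authentic(recording, signature))                   # True
print(looks_authentic(b"AI-generated fake audio", signature))  # False

A doctored or fabricated clip fails the check, because even a one-byte change breaks the signature. Of course that only proves which device signed the clip, not that what the microphone picked up was real in the first place.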

What are you going to do when you get a frantic phone call from your spouse or child and you cannot trust what you are hearing, no matter how authentic it sounds?

Told ya

Now, plug in the robots- Tens of thousands of them in your vicinity.

Huh?

Yep

And who controls all of this? Not you
 
Posts: 107502 | Registered: January 20, 2000
Ammoholic
Picture of Skins2881
posted
You didn't tell me. I have always had a fascination with this stuff, and I've made my feelings on it pretty clear. Some may question my worries about it. For one, I'd rather know thy enemy; second, I'm not going to pass up something that could give me an advantage.

People will one day look back and mark specific dates for breakthroughs in AI, quantum computing, and robotics. It sounds like Terminator 31 or whatever number the series is up to, but there is a non-zero probability that AI will kill us. I'm not willing to put a percentage on it, but depending on the odds the house is giving in Vegas, I'd be willing to consider that bet. I'd certainly take mass disruption at a lower payout.

Who knows, maybe it brings the four-day work week, UBI, and four weeks of vacation for all, along with Star Trek replicators providing food and fresh water for malnourished and drought-stricken areas?



Jesse

Sic Semper Tyrannis
 
Posts: 20815 | Location: Loudoun County, Virginia | Registered: December 27, 2014
Peace through
superior firepower
Picture of parabellum
posted
I wasn't speaking directly to you. I wasn't speaking to anyone in particular.

Ultimately, if left unchecked, what this will lead to is the world committing nuclear suicide. If no one can trust anything, eventually the button will get pushed, not in an effort to achieve a victory but expressly for the purpose of obliterating mankind.
 
Posts: 107502 | Registered: January 20, 2000
Ammoholic
Picture of Skins2881
posted
 
Posts: 20815 | Location: Loudoun County, Virginia | Registered: December 27, 2014
Wait, what?
Picture of gearhounds
posted
I wonder if the program will get on the “pay off my student loan” bandwagon too.




“Remember to get vaccinated or a vaccinated person might get sick from a virus they got vaccinated against because you’re not vaccinated.” - author unknown
 
Posts: 15559 | Location: Martinsburg WV | Registered: April 02, 2011
© SIGforum 2024