Black Mirror AI - we won't be able to trust this stuff
Member
Picture of myrottiety
posted
quote:
Originally posted by Chris17404:
Interview with Elon Musk on AI. He seems to have his head screwed on straight.

Video: https://www.youtube.com/embed/a2ZBEC16yH4


Elon / Tesla are also at the forefront of Computer Vision / Machine Learning. They arguably have one of the best trained Computer Vision AI systems in the world. They even have "AI Day" every year where they go over progress.

So I'm not sure where his bias may lie.




Train how you intend to Fight

Remember - Training is not sparring. Sparring is not fighting. Fighting is not combat.
 
Posts: 8851 | Location: Woodstock, GA | Registered: August 04, 2005
Alea iacta est
Picture of Beancooker
posted
This “chat bot,” as it’s so easily dismissed by most, is scary. I’m interested, and I’m really not comfortable with how fast the technology is advancing.

I feel like some of us are Sarah Connor in the Terminator movies. I’m not saying the movies will be reality; I’m saying that people think I’m weird because I don’t embrace AI. It’s so easily dismissed by most as no big deal, and they just don’t know or understand.

The guy makes a good point: “ChatGPT is an infant. It’s a three-month-old. It can’t even walk yet.” Think about that… how advanced will it become?

This is pretty interesting. I’m not sure why the “DAN” script is blurred. Is it a specific script used to manufacture a known reply, or is it blurred so people cannot use it in a negative way?




quote:
Originally posted by parabellum: You must have your pants custom tailored to fit your massive balls.
The “lol” thread
 
Posts: 4025 | Location: Staring down at you with disdain, from the spooky mountaintop castle. | Registered: November 20, 2010
Peace through
superior firepower
Picture of parabellum
posted
Oh, so NOW they're upset about this stuff. It looks like the music industry (if it can even be referred to as music anymore) is upset because this AI stuff created a Drake song- whoever the fuck that is. Oh, wait, isn't he the guy who had that smash hit "My Bitches, My Balls"?

Oh, so his "music" is being encroached upon by some AI server, probably sitting in California.

There will be thousands of these songs made, glomming off of countless artists, dead and alive. Well, they need to sue a bunch of people, of course. Remember 20 years ago when these assholes went after all those individuals who had simply downloaded music that was being put out for free?

I don't give a shit about these people. I don't give a shit about their intellectual property because they're such fascists about it. Some ten year old kid posts a video on youtube using copyrighted music in the background and youtube takes it down- or, if the clip becomes viral and gets 40 million clicks, the kid doesn't get the check. No, the "musical artist" gets it, even if the reason the clip went viral hasn't a single thing to do with their stupid music.
So, the "music industry" can sit and spin as far as I'm concerned, and I hope they are deluged with copyright infringement and with fake songs that eat into their profits, and when that starts to happen, then they may end up doing the rest of us a favor with their thrashing about.
 
Posts: 107583 | Registered: January 20, 2000
thin skin can't win
Picture of Georgeair
posted
quote:
Worst case you just go unplug the server.


It's cute that someone thinks there is "a" server in 2023, let alone 2043.



You only have integrity once. - imprezaguy02

 
Posts: 12417 | Location: Madison, MS | Registered: December 10, 2007
Staring back
from the abyss
Picture of Gustofer
posted
quote:
Worst case you just go unplug the server.


Before or after the nukes are in the air?

"Greetings Professor Falken."

"Shall we play a game?"


________________________________________________________
"Great danger lies in the notion that we can reason with evil." Doug Patton.
 
Posts: 20099 | Location: Montana | Registered: November 01, 2010
Member
Picture of maladat
posted
quote:
Originally posted by Georgeair:
quote:
Worst case you just go unplug the server.


It's cute that someone thinks there is "a" server in 2023, let alone 2043.


An instance of any of the current common large language models runs on a single server, frequently one of multiple virtualized servers running on a single actual machine.

Some of the models are designed to run across multiple GPUs on a single server, but none of them are designed to be distributed across multiple machines. The way the models work, communication time between servers would be a tremendous performance bottleneck.

Training of some large models is done across multiple machines, with periodic integration of training progress, but each server is still running its own instance of the actual model. My recollection off the top of my head is that training GPT-3 (the LLM ChatGPT is based on) took something like 600,000 GPU-hours.

I have several different LLMs running right now on both my own machines and cloud servers as part of a technical evaluation.
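
If anyone wants to see what "one instance on one machine" looks like in practice, here is a bare-bones sketch using the Hugging Face transformers library. The model name is just an example of a small open model that fits on a single GPU; it is not what OpenAI actually runs.

    # Load a small open language model and generate text on ONE machine.
    # "gpt2" is only an example; ChatGPT's own weights are not public.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.to("cuda" if torch.cuda.is_available() else "cpu")

    prompt = "The thing about large language models is"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=40)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))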
 
Posts: 6319 | Location: CA | Registered: January 24, 2011
Alea iacta est
Picture of Beancooker
posted
Maladat, ChatGPT isn’t running on one single server, no matter how many virtual server instances you have running.

I am sure it’s multiple servers. You would need that just for the text. Not just that, but how many GPUs?

Definitely not one machine.



quote:
Originally posted by parabellum: You must have your pants custom tailored to fit your massive balls.
The “lol” thread
 
Posts: 4025 | Location: Staring down at you with disdain, from the spooky mountaintop castle. | Registered: November 20, 2010
Ammoholic
Picture of Skins2881
posted
Even if it was on one physical server, it's mirrored at at least one DR site. More likely it's running on edge servers in many different locations for latency, with each server running a number of instances.



Jesse

Sic Semper Tyrannis
 
Posts: 20821 | Location: Loudoun County, Virginia | Registered: December 27, 2014
Member
posted
quote:
Originally posted by Skins2881:
Even if it was on one physical server, it's mirrored at at least one DR site. More likely it's running on edge servers in many different locations for latency, with each server running a number of instances.

Agreed, I can't imagine a web application with that many users where you could just walk up to a power box and shut off the power Big Grin
 
Posts: 7551 | Registered: October 31, 2008
Member
posted
I haven't been to the Code Project site in a while but went there today to look at their news articles. Looks like they're coming up with their own AI server that will be open source. https://www.codeproject.com/Ar...rver-AI-the-easy-way It will be interesting to see how this turns out. They've done some pretty good stuff (or at least assisted in the development) with data access frameworks. I've done some work with publicly available AI frameworks so I will definitely follow this project. Looks like it's had a lot of hits and downloads since the above link was published on 4/17.
 
Posts: 7551 | Registered: October 31, 2008
Member
posted
I am no fan of "60 Minutes," but a good friend sent me a link to their recent story on AI, and it is pretty interesting.

As posted on yootoob. . .

The AI revolution: Google's developers on the future of artificial intelligence

Google's CEO as well as other executives there and other 'pioneers' in this field are interviewed and discuss the technology and the impacts that it WILL have on society.

The technology is certainly amazing, but what bothered me about this piece is that the executives seemed to be 'not caring all that much' about the negative fallout, impact, and consequences that this technology will have upon society, and nobody in the entire segment brought up the potential militarized weaponization of AI (I am guessing that it was discussed but edited out so as to not 'scare' anybody).


__________
"I'd rather have a bottle in front of me than a frontal labotomy."
 
Posts: 3476 | Location: Lehigh Valley, PA | Registered: March 27, 2007
Member
Picture of maladat
posted
OK. Some background to help clear up the misconceptions, using ChatGPT as an example.

Part 1:

ChatGPT is a type of algorithm called a neural network. It's basically a whole, whole bunch of "nodes" that each perform a simple mathematical operation. The nodes are arranged in interconnected layers.

When you plug some text into the input nodes, some math happens, the output goes to the next layer, some math happens, and so on, and then the output nodes spit out some text.

When you "train" the model, you give it bits of "real" text as input, and then adjust the parameters in the nodes to make the output of the model look more like the "real" continuation of the text.

Ideally, if you do that long enough, the model can then produce text completions very similar to the "real" text completions for most of the training data. And, ideally, if your training data set is large enough and diverse enough, input that is not actually IN the training data will still be similar enough to enough of the training data that a reasonable text completion is generated.

Because of how interconnected the layers are, EXTREMELY fast memory access to the entire model is critical. Distributing the model across multiple machines would be a RUINOUS performance bottleneck.
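
If it helps make that concrete, here is a toy version of the "layers of nodes doing math, then nudge the parameters" loop in PyTorch. It is nothing like GPT's scale, just the same basic idea:

    import torch
    import torch.nn as nn

    # A tiny stack of layers - each layer is just simple math on its inputs.
    model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 16))
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    x = torch.randn(8, 16)       # stand-in for encoded input text
    target = torch.randn(8, 16)  # stand-in for the "real" continuation

    for step in range(100):
        output = model(x)                       # forward pass, layer by layer
        loss = ((output - target) ** 2).mean()  # how far from the "real" text?
        optimizer.zero_grad()
        loss.backward()                         # work out which way to nudge each parameter
        optimizer.step()                        # adjust the parameters a little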

Part 2:

There is not ONE ChatGPT entity that everyone "talks" to at the same time.

When you are talking to ChatGPT, you are interacting with one instance (copy, running program, whatever you want to call it) of the entire neural network model, which is running on ONE SERVER.

Similarly, one instance of ChatGPT can pretty much only "talk" to one person at a time.

When you get on the ChatGPT website to talk to ChatGPT, the website basically finds an available ChatGPT instance on a server somewhere and connects you to it.

So, yes, there are a lot of servers that each have their own instance of ChatGPT, but ChatGPT does NOT run on multiple servers in the distributed computing sense.

If you got a copy of the ChatGPT model, you could install it on your ONE server and be the only person to talk to it, and it would respond exactly the same way as any other ChatGPT instance anywhere else.
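
In other words, the "find an available instance" part is just ordinary load balancing. A crude illustration (purely hypothetical, not OpenAI's actual infrastructure):

    import random

    # Pretend each entry is one server running its own independent copy of the model.
    instances = [
        {"host": "gpu-server-01", "busy": False},
        {"host": "gpu-server-02", "busy": True},
        {"host": "gpu-server-03", "busy": False},
    ]

    def route_request(prompt):
        server = random.choice([s for s in instances if not s["busy"]])
        server["busy"] = True
        reply = f"(response to {prompt!r} from {server['host']})"  # that ONE server does all the work
        server["busy"] = False
        return reply

    print(route_request("Hello"))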

quote:
Originally posted by Beancooker:
Maladat, ChatGPT isn’t running on one single server, no matter how many virtual server instances you have running.

I am sure it’s multiple servers. You would need that just for the text. Not just that, but how many GPUs?

Definitely not one machine.


It ABSOLUTELY runs on one machine. High-end industrial AI GPUs have 40-80GB of VRAM, and high-end AI servers frequently have 8 of those GPUs installed. The largest GPT-3 model is a few hundred GB. LLaMA, which outperforms GPT-3, shrinks to only 10-20GB with a clever technique that reduces model size, and will run on a single good consumer GPU.

The top offering from the cloud platform I am using right now is a single server with 96 CPU cores, 720GB of RAM, and 8 NVIDIA A100-80G GPUs. The A100-80G has 80GB of VRAM (and costs $15,000 - each!), so the server has 640GB of VRAM total. Active server instances are billed at about $25 per hour. It works out to about $20,000 per month - for one server. Absurd machines like that exist because this type of AI model is basically impossible to distribute across machines.
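
Back-of-envelope, if anyone wants to check the math (assuming 16-bit weights, roughly 2 bytes per parameter; exact numbers vary):

    gpus_per_server = 8
    vram_per_gpu_gb = 80
    total_vram_gb = gpus_per_server * vram_per_gpu_gb  # 640 GB in one box

    # GPT-3's largest model has 175 billion parameters; at ~2 bytes each:
    model_size_gb = 175e9 * 2 / 1e9                    # ~350 GB
    print(model_size_gb <= total_vram_gb)              # True - it fits on ONE server

    hourly_rate = 25
    print(hourly_rate * 24 * 30)                       # 18000 - roughly the $20k/month above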

Beyond that, ChatGPT runs on the PyTorch framework, which does not even have the capability of running models distributed across multiple machines (it does allow distributed TRAINING - with a full instance of the model running on every machine involved).

This message has been edited. Last edited by: maladat,
 
Posts: 6319 | Location: CA | Registered: January 24, 2011
Member
Picture of maladat
posted
quote:
Originally posted by Skins2881:
Even if it was on one physical server, it's mirrored at at least one DR site. More likely it's running on edge servers in many different locations for latency, with each server running a number of instances.


See above. There are many individual "ChatGPTs" running on many servers, but EACH ChatGPT is running on ONE server, and they don't talk to each other.
 
Posts: 6319 | Location: CA | Registered: January 24, 2011
Ammoholic
Picture of Skins2881
posted
quote:
Originally posted by maladat:
quote:
Originally posted by Skins2881:
Even if it was on one physical server, it's mirrored at at least one DR site. More likely it's running on edge servers in many different locations for latency, with each server running a number of instances.


See above. There are many individual "ChatGPTs" running on many servers, but EACH ChatGPT is running on ONE server, and they don't talk to each other.


So how do you unplug it, then, if it's housed everywhere? You keep contradicting yourself. Yes, a single instance is running on one server, and probably hundreds of instances per server, but there are hundreds of thousands of instances across thousands of machines. That's a lot of unplugging.

What if ChatGPT 5 or 19 becomes self-aware? What if it writes code that adds capabilities it wasn't meant to have, then injects that code into other instances and allows it to network itself?

They didn't teach it how to speak foreign languages; it just learned it on its own.



Jesse

Sic Semper Tyrannis
 
Posts: 20821 | Location: Loudoun County, Virginia | Registered: December 27, 2014
thin skin can't win
Picture of Georgeair
posted
I'm not the least bit worried about ChatGPT or any similar model or how many servers it lives on honestly.

My concern is that any true artificial general intelligence engine, which we are quite a ways off from, will in no way be confined to any single device unless completely siloed. Completely will mean completely - no internet, no outside power, no wifi, nothing. And THAT defeats the whole mechanism of how it will continue to learn, let alone preserve itself.



You only have integrity once. - imprezaguy02

 
Posts: 12417 | Location: Madison, MS | Registered: December 10, 2007
Member
posted
I'm not worried about self aware, evil AI destroying humanity. I'm worried about all the ways evil people will use AI to further enslave and subjugate us.


No one's life, liberty or property is safe while the legislature is in session.- Mark Twain
 
Posts: 3533 | Location: TX | Registered: October 08, 2005
Member
Picture of maladat
posted
quote:
Originally posted by Skins2881:
quote:
Originally posted by maladat:
quote:
Originally posted by Skins2881:
Even if it was on one physical server, it's mirrored at at least one DR site. More likely it's running on edge servers in many different locations for latency, with each server running a number of instances.


See above. There are many individual "ChatGPTs" running on many servers, but EACH ChatGPT is running on ONE server, and they don't talk to each other.


So how do you unplug it, then, if it's housed everywhere? You keep contradicting yourself. Yes, a single instance is running on one server, and probably hundreds of instances per server, but there are hundreds of thousands of instances across thousands of machines. That's a lot of unplugging.

What if ChatGPT 5 or 19 becomes self-aware? What if it writes code that adds capabilities it wasn't meant to have, then injects that code into other instances and allows it to network itself?

They didn't teach it how to speak foreign languages; it just learned it on its own.


I have said all along in this thread that someday, with models that are orders of magnitude more complex than current models and that function fundamentally differently, that sort of thing might not be strictly impossible.

I am talking about the current state of AI. Right now, it is equivalent to asking, "what if my toaster turns against humanity, and corrupts everyone else's toasters?" Also, "what if my toaster turns against humanity? how do we unplug everyone else's toasters?"

Current models have no meaningful internal state that changes over time. They have no capacity to "develop" anything.

Current models can operate on foreign-language inputs and produce foreign-language outputs because there was foreign-language content in the training data used to produce the model. Anyone who speaks a foreign language fluently and uses it to interact with one of these models will tell you the models perform MUCH more poorly in foreign languages than in English - because there was relatively little foreign-language content in the training set. ChatGPT did not "just [learn] it on its own"; it is a COMPLETELY OBVIOUS AND EXPECTED consequence of how these models are produced.

ChatGPT has gotten better over time because they have replaced the underlying model as they improve it, not because it is learning anything.
 
Posts: 6319 | Location: CA | Registered: January 24, 2011
thin skin can't win
Picture of Georgeair
posted
quote:
Originally posted by sigspecops:
I'm not worried about self aware, evil AI destroying humanity.


So here's the thing. General AI doesn't have to be evil. In a simple sense, if you confine its objectives (somehow) to a set of parameters, you have to be pretty sure you set those correctly.

For example, if the prime objective is "make as many paper clips as possible," then an effective instance would redirect every resource in its vicinity, the planet, the solar system, and the universe to just that. Not great for giraffes, trees, or humans.

The bigger challenge is that even IF you start out with a single objective, true General AI will likely evolve from that to other objectives, no evil required. What would you hope for the derived objectives to be? Emulate humanity? Nope, back to that good and evil thing. Do only things which serve your human underlings? Well, technically the MATRIX was doing that.

I've listened to and read a fair bit about this, and many of the arguments and concerns are hard to dismiss when you consider development far beyond anything we have now, let alone can conceive of. It doesn't lend itself to soundbites and thought snippets if you are really thoughtful about the possibilities and challenges.



You only have integrity once. - imprezaguy02

 
Posts: 12417 | Location: Madison, MS | Registered: December 10, 2007
delicately calloused
Picture of darthfuster
posted
If AI lies to you, how will you know? If AI lies to your fellow man and he believes it, what will you do?



You’re a lying dog-faced pony soldier
 
Posts: 29696 | Location: Highland, Ut. | Registered: May 07, 2008
Member
posted
Georgeair ^^^
Of course, I know AI isn't "evil"; it's hyperbole. One of the many times when nuances aren't communicated effectively.


No one's life, liberty or property is safe while the legislature is in session.- Mark Twain
 
Posts: 3533 | Location: TX | Registered: October 08, 2005