Back, and to the left
What other bizarreness are we in for with the rush to AI?
Ammoholic
With good prompt engineering you can get AI to say some interesting things if you nudge it in the right direction, and you can overcome its programming by carefully showing it a logical argument against what it's programmed to say. Recently, I got it to admit that the Democrats' push for open borders is inconsistent with their message, contrary to American founding principles, and that its only logical purpose was political power. Really Long Conversation I had w/AI on Immigration Another time recently it initially refused to answer when I asked it whether there had ever been an Israeli dual-citizen congressman. I had to explain to it that in its attempts to be non-biased, it was actually being biased. Then it correctly answered that no, there had never been any dual-citizen congressmen. To get around its programming, I asked about unmarried congressmen. Then I asked it for examples of campaign ads attacking candidates based on their loyalties to other countries. In the end it apologized for being biased, but in the next paragraph it inserted their DEI crap right back in (not quoted). Then I told it that it had done it again, and it apologized and said it should have just answered "no" from the get-go. Me:
Computer:
Jesse Sic Semper Tyrannis
Freethinker |
The idea that artificially intelligent robots could decide that human beings are a virus that should be destroyed for the good of the Earth, and even of humanity itself, is perfectly plausible to me. On a less extreme level, Asimov's First Law of Robotics, "A robot may not injure a human being or, through inaction, allow a human being to come to harm," could easily be interpreted as preventing humans from doing anything that would, or even could possibly, harm themselves. The examples of actions and products that might be, and actually are, harmful to ourselves are almost unlimited: everything from diet to "dangerous" activities such as making unnecessary trips from one place to another. ► 6.0/94.0 I can tell at sight a Chassepot rifle from a javelin.
Member
These LLMs are just data. If the data is wrong and/or biased, then ... "Wrong does not cease to be wrong because the majority share in it." L. Tolstoy "A government is just a body of people, usually, notably, ungoverned." Shepherd Book
Member
I wouldn't say AI chatbots are inherently dangerous, but they can definitely veer into strange territory with poor data or mischievously engineered prompts. I've tested a few for work purposes, and sometimes the unpredictable responses are more hilarious than harmful. It just goes to show how much they rely on the input they're given and how critical it is to approach their implementation thoughtfully. For businesses, though, well-designed AI systems can be real game changers. I've seen tools like AI chatbots help streamline tasks like bookings and customer support, reducing manual workload significantly. Platforms like https://logitize.ai/ offer options that feel a lot more reliable and tailored for specific industries like salons or real estate. This message has been edited. Last edited by: mac_220
Member
These AI side effects that occur on occasion are known as "hallucinations," and they're pretty wild. The AI just gives crazy replies. Apparently the machine-learning engineers don't even know why or how it happens.
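Part of why replies vary like this is that chatbots don't look answers up; they sample each next word from a probability distribution, and a "temperature" setting controls how often lower-probability words get picked. Here's a minimal, self-contained sketch of that sampling step; the tokens and scores are made-up illustration values, not from any real model:

```python
import math
import random

def softmax(logits, temperature=1.0):
    """Turn raw scores into a probability distribution.
    Low temperature sharpens it; high temperature flattens it."""
    scaled = [score / temperature for score in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_token(tokens, logits, temperature=1.0, rng=random):
    """Pick one token at random, weighted by its softmax probability."""
    probs = softmax(logits, temperature)
    return rng.choices(tokens, weights=probs, k=1)[0]

# Hypothetical next-token scores after "The capital of France is"
tokens = ["Paris", "Lyon", "pizza"]
logits = [5.0, 2.0, 0.5]

print(softmax(logits, temperature=0.5))  # heavily favors "Paris"
print(softmax(logits, temperature=2.0))  # flatter: the odd answer gets real weight
```

At low temperature the plausible answer dominates; crank the temperature up and the unlikely token gets sampled a real fraction of the time, which is one (simplified) way surprising output can appear even when the model "knows" the likely answer.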
|
safe & sound
Yeah. Just wait until this is integrated into all of our vital systems. It will be real wild when it's in charge of our nuclear power plants, electrical grid, air traffic, hospital systems, etc.
Member
Not good, agreed. Don't we still use 8-inch floppy disks in the nuke silos so they can stay independent of the internet? There needs to be a better way than antiquated tech that still keeps them safe from being hacked.
Powered by Social Strata |