SIGforum
Manual Safety on military pistols

This topic can be found at:
https://sigforum.com/eve/forums/a/tpc/f/430601935/m/1140050515

May 04, 2025, 03:24 PM
12131
Manual Safety on military pistols
quote:
Originally posted by RogueJSK:
... This is a good example of the problem with just blindly copy/pasting whatever some AI chatbot says.

What we have right now is not actually Artificial Intelligence. They don't actually think on their own, analyzing queries and crafting reasoned answers based on the data. Instead, they're software scripts called Large Language Models that take a mishmash of what's been fed to them from scraping websites, shove it in a blender, and spew out whatever amalgamation results, regardless of whether it's accurate or not. They have no ability to judge the accuracy/validity of their statements, or to understand the factors at play in the question. They merely check that the words are put together in a way that mimics human language.

They're effectively bullshit generators at this time, pulling answers out of their ass and confidently answering in a way that sounds good but is not necessarily accurate….

Man, nailed it! Here is a recent example right here on the forum, regarding the pinned front sight on the 1996 P229. Until proven otherwise, it's a BS AI answer, afaic.


Q






May 09, 2025, 07:23 PM
1lowlife
quote:
Originally posted by RogueJSK:
... This is a good example of the problem with just blindly copy/pasting whatever some AI chatbot says.

What we have right now is not actually Artificial Intelligence. They don't actually think on their own, analyzing queries and crafting reasoned answers based on the data. Instead, they're software scripts called Large Language Models that take a mishmash of what's been fed to them from scraping websites, shove it in a blender, and spew out whatever amalgamation results, regardless of whether it's accurate or not. They have no ability to judge the accuracy/validity of their statements, or to understand the factors at play in the question. They merely check that the words are put together in a way that mimics human language.

They're effectively bullshit generators at this time, pulling answers out of their ass and confidently answering in a way that sounds good but is not necessarily accurate. (Think of them like your annoying loudmouth know-it-all coworker, the self-proclaimed expert on every subject who is really just faking it most of the time and hoping their confident recitation sways their audience into believing their BS.)

That Safariland website it's quoting as its source for confidently declaring that all those countries carry with a round chambered makes ZERO mention of whether those countries carry chambered/not chambered.

It's completely silent on that subject.

Copilot made that up. It sounds good, it's stated confidently, it appears to make sense, it even has a reference footnote... but it's simply not accurate. The industry term for this is "hallucination", and it's very common with these so-called "AI" chatbots.

So stop relying on chatbots to answer your questions. It's quick and easy, sure. But whether you're actually going to get an accurate/truthful/non-bullshit response varies wildly.

It's more likely to be accurate than a true random answer generator like a Magic 8 Ball, but not yet accurate enough to trust. Maybe we'll get there someday after further refinement, or this may just be a limitation that's intrinsic to this style of quasi-"AI" LLM, in which case we won't get a fully accurate AI until we manage to get to actual true AI that's able to think and reason.


For further reference, from https://www.psypost.org/schola...ng-its-bullshitting/

quote:
Scholars: AI isn’t “hallucinating” — it’s bullshitting

Large language models, such as OpenAI’s ChatGPT, have revolutionized the way artificial intelligence interacts with humans, producing text that often seems indistinguishable from human writing. Despite their impressive capabilities, these models are known for generating persistent inaccuracies, often referred to as “AI hallucinations.” However, in a paper published in Ethics and Information Technology, scholars Michael Townsen Hicks, James Humphries, and Joe Slater from the University of Glasgow argue that these inaccuracies are better understood as “bullshit.”

Large language models (LLMs) are sophisticated computer programs designed to generate human-like text. They achieve this by analyzing vast amounts of written material and using statistical techniques to predict the likelihood of a particular word appearing next in a sequence. This process enables them to produce coherent and contextually appropriate responses to a wide range of prompts.

Unlike human brains, which have a variety of goals and behaviors, LLMs have a singular objective: to generate text that closely resembles human language. This means their primary function is to replicate the patterns and structures of human speech and writing, not to understand or convey factual information.

The term “AI hallucination” is used to describe instances when an LLM like ChatGPT produces inaccurate or entirely fabricated information. This term suggests that the AI is experiencing a perceptual error, akin to a human seeing something that isn’t there. However, this metaphor is misleading, according to Hicks and his colleagues, because it implies that the AI has a perspective or an intent to perceive and convey truth, which it does not.

To better understand why these inaccuracies might be better described as bullshit, it is helpful to look at the concept of bullshit as defined by philosopher Harry Frankfurt. In his seminal work, Frankfurt distinguishes bullshit from lying. A liar, according to Frankfurt, knows the truth but deliberately chooses to say something false. In contrast, a bullshitter is indifferent to the truth. The bullshitter’s primary concern is not whether what they are saying is true or false but whether it serves their purpose, often to impress or persuade.

Frankfurt’s concept highlights that bullshit is characterized by a disregard for the truth. The bullshitter does not care about the accuracy of their statements, only that they appear convincing or fit a particular narrative.

The scholars argue that the output of LLMs like ChatGPT fits Frankfurt’s definition of bullshit better than the concept of hallucination. These models do not have an understanding of truth or falsity; they generate text based on patterns in the data they have been trained on, without any intrinsic concern for accuracy. This makes them akin to bullshitters — they produce statements that can sound plausible without any grounding in factual reality.

The distinction is significant because it influences how we understand and address the inaccuracies produced by these models. If we think of these inaccuracies as hallucinations, we might believe that the AI is trying and failing to convey truthful information.

But AI models like ChatGPT do not have beliefs, intentions, or understanding, Hicks and his colleagues explained. They operate purely on statistical patterns derived from their training data.

When they produce incorrect information, it is not due to a deliberate intent to deceive (as in lying) or a faulty perception (as in hallucinating). Rather, it is because they are designed to create text that looks and sounds right without any intrinsic mechanism for ensuring factual accuracy.

“Investors, policymakers, and members of the general public make decisions on how to treat these machines and how to react to them based not on a deep technical understanding of how they work, but on the often metaphorical way in which their abilities and function are communicated,” Hicks and his colleagues concluded. “Calling their mistakes ‘hallucinations’ isn’t harmless: it lends itself to the confusion that the machines are in some way misperceiving but are nonetheless trying to convey something that they believe or have perceived.”

“This, as we’ve argued, is the wrong metaphor. The machines are not trying to communicate something they believe or perceive. Their inaccuracy is not due to misperception or hallucination. As we have pointed out, they are not trying to convey information at all. They are bullshitting.”

“Calling chatbot inaccuracies ‘hallucinations’ feeds into overblown hype about their abilities among technology cheerleaders, and could lead to unnecessary consternation among the general public. It also suggests solutions to the inaccuracy problems which might not work, and could lead to misguided efforts at AI alignment amongst specialists,” the scholars wrote.

“It can also lead to the wrong attitude towards the machine when it gets things right: the inaccuracies show that it is bullshitting, even when it’s right. Calling these inaccuracies ‘bullshit’ rather than ‘hallucinations’ isn’t just more accurate (as we’ve argued); it’s good science and technology communication in an area that sorely needs it.”

OpenAI, for its part, has said that improving the factual accuracy of ChatGPT is a key goal.

“Improving factual accuracy is a significant focus for OpenAI and many other AI developers, and we’re making progress,” the company wrote in a 2023 blog post. “By leveraging user feedback on ChatGPT outputs that were flagged as incorrect as a main source of data—we have improved the factual accuracy of GPT-4. GPT-4 is 40% more likely to produce factual content than GPT-3.5.”

“When users sign up to use the tool, we strive to be as transparent as possible that ChatGPT may not always be accurate. However, we recognize that there is much more work to do to further reduce the likelihood of hallucinations and to educate the public on the current limitations of these AI tools.”
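
For illustration only, the "predict the statistically likely next word" process the article describes can be sketched as a toy bigram model in Python. This is a simplified stand-in, not how any real chatbot is built (actual LLMs are large neural networks trained on enormous corpora), and the corpus and function names below are made up for the example. The point is just that the output is chosen by word frequency, with no check on whether it is true.

    from collections import Counter, defaultdict
    import random

    # Toy training text. Whatever phrasing appears most often is what the
    # model will tend to reproduce, true or not.
    corpus = (
        "the pistol has a manual safety . "
        "the pistol has no manual safety . "
        "the pistol has no manual safety ."
    ).split()

    # Count how often each word follows each other word (bigram statistics).
    following = defaultdict(Counter)
    for current_word, next_word in zip(corpus, corpus[1:]):
        following[current_word][next_word] += 1

    def generate(start, length=8):
        """Build text by repeatedly sampling a statistically likely next word."""
        words = [start]
        for _ in range(length):
            candidates = following.get(words[-1])
            if not candidates:
                break
            next_words, counts = zip(*candidates.items())
            words.append(random.choices(next_words, weights=counts)[0])
        return " ".join(words)

    print(generate("the"))  # fluent-looking output, chosen by frequency, not by fact

Run it a few times and the output reads smoothly either way; whether it says the pistol "has a" or "has no" manual safety depends only on which phrase was more common in the toy corpus, which is exactly the indifference to truth the article is describing.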


Thank you...
May 09, 2025, 08:38 PM
jljones
Can we by chance start a different thread to discuss the merits of AI if you guys want to keep discussing it?




www.opspectraining.com

"It's a bold strategy, Cotton. Let's see if it works out for them"



May 10, 2025, 12:02 PM
V-Tail
quote:
Originally posted by 92fstech:

The entire Glock platform, which is one of the most popular military and law enforcement pistols in the world, does not have a manual safety.
It is my understanding that Glock models 17, 19, and 22 were produced in small quantities as 'S' models, and were used by some police departments in Taiwan and Australia, maybe other areas too.



My hovercraft is full of eels
May 10, 2025, 12:53 PM
RogueJSK
quote:
Originally posted by V-Tail:
quote:
Originally posted by 92fstech:

The entire Glock platform, which is one of the most popular military and law enforcement pistols in the world, does not have a manual safety.
It is my understanding that Glock models 17, 19, and 22 were produced in small quantities as 'S' models, and were used by some police departments in Taiwan and Australia, maybe other areas too.


Correct. There have been a few production runs over the years of foreign police contract Glocks that had factory manual safeties.

Known users include:
Royal Thai Police
Taiwan National Police
Tasmania (Australia) State Police
Sao Paulo (Brazil) State Police

The Sao Paulo contract is by far the largest at 40,000 pistols, whereas the others varied from a few dozen to a few hundred.

These manual safeties were made in several different variants.

The first known Glocks with a factory safety were prototype Gen 1 G17s that Glock submitted to the Austrian Army pistol trials in 1982, before the Austrian Army decided they didn't need a manual safety and adopted the G17 without it.


The ones produced on contract in the 1990s for the Tasmanian Police had a similar but slightly taller thumb safety lever.


Later there was at least one contract with a crossbolt safety, for the Taiwanese Police.


And there was a slightly shorter/fatter thumb safety, as seen on a Thai Police pistol.


Glock eventually settled on a smaller thumb safety lever, as seen on a recent Sao Paulo Police pistol.


This last version was also seen on the G19MHS that was part of the US Army pistol trials, and later became the G19X (sans safety lever).

May 10, 2025, 02:02 PM
ruger357
quote:
Originally posted by TheNewbie:
triple post!… this is why you need a manual safety and no round chambered!


I heard the 320 still goes off! Wink
Sorry Jerry.


-----------------------------------------

Roll Tide!

Glock Certified Armorer
NRA Certified Firearms Instructor
May 10, 2025, 02:56 PM
18Z50
Personally, I like having the option of a manual safety on a military sidearm.

Most people in the US military are not gun people. It needs to be understood that the majority of people who carry a sidearm are not combat troops, and they may only use the sidearm in training once or twice a year.

In a stressful situation, I can see all kinds of chaos occurring.

That being said, my military organization issues Glock 19s as the standard sidearm, and my son and my brother prefer the Glock as a secondary weapon vs. the 320 or M9.

18Z50
May 10, 2025, 03:21 PM
jljones
quote:
Originally posted by 18Z50:


In a stressful situation, I can see all kinds of chaos occurring.



Cops are no better trained and they do just fine without a manual safety. And they get in way more shootings than your average military member.




www.opspectraining.com

"It's a bold strategy, Cotton. Let's see if it works out for them"



May 10, 2025, 04:20 PM
RichardC
Israel

IMI Jericho 941

DA/SA with a frame-mounted manual safety.



____________________