lol, I'm glad you are in alignment with the response presented. Maybe you can help me on a couple of the points described. The first bullet after 3 states... "Clean the edge connectors on the PCB." I can't seem to find the edge connectors on my Defender PCB; maybe you can point me to them.
Secondly,
I "checked for continuity" on ALL the caps on my board and not a single one seems to have continuity, do I need to replace them all or is there something wrong with my meter?
The biggest issue I have, aside from the nonsense answers provided, is the idea that AI will "properly" interpret the poor language and/or descriptions I write when trying to help people with very context-specific problems, and then provide similar nonsense answers to people while representing them as "my" answers. I don't need any additional help looking stupid; I make enough silly mistakes myself.
This is an interesting phenomenon that I've been observing more and more in public commentary.
People jump on AI responses, pick apart the replies, and say, "Hey look! It's wrong about THAT little detail! What a piece of garbage. These things will never be as smart as people! Look how dumb it is!"
And while all of these systems DO respond with complete and utter bullshit some percentage of the time, you have to give them credit for how accurate a lot of the responses are a lot of the time. And it's not pure chance. The fact that Greg's Ken Layton question above associated him with monitors is as funny as it is relevant, even if it wasn't the best answer a human could possibly give.
I call it the 'moving goalpost Turing Test'. People used to treat the 'Turing Test' as a milestone where a human interlocutor wouldn't be able to tell whether they were talking to a machine or another human. But we've already reached a point where a non-trivial percentage of the population can be fooled into not knowing the difference, a non-trivial amount of the time. (And those numbers literally increase every day.) So it isn't a black-and-white thing. It's a continuum. And the slider is moving every day.
Also, the fact that these systems have COMPLETELY mastered linguistic structure, spelling, and grammar goes completely unmentioned. They don't make spelling, grammar, or structural mistakes. EVER.
Now what you hear is, "Well, a REAL Turing Test would involve a subject matter *expert* talking to an AI, and trying to tell the difference." And what happens when it starts fooling experts (e.g., the fact that models are now passing medical and bar exams)? "Oh, well it'll have to fool a TEAM of experts."
The goalposts keep moving. That's not to say the technology is anywhere near perfect, nor that I'm advocating or defending it. But people seem so focused on what it CAN'T do that they're not acknowledging what it can.
In this case, yes, you can interpret 'edge connector' and 'continuity' literally and jump all over that response. But the nits are getting pickier.
Functionally, an edge connector can be a header, a ribbon connector, or whatever connects the 'edge' of the board to adjacent things. Most boards have finger-based edge connectors; Defender is one exception. Ok, fine. But conceptually, is that any worse a mistake than a lot of *humans* would make in other contexts?
It's also not TOO far of a stretch to interpret 'checking components for continuity' to mean checking continuity TO and FROM those parts. After all, that comment is under the 'Check for Broken Traces or Components' heading. That feels like something *I've* actually said here, as I know I often say 'check everything for continuity', 'use your DMM, not your eyes', etc.
Yes, technically you are correct. But those things aren't THAT wrong. And you have to admit, the list overall is pretty accurate otherwise.
Not to mention, the act of pointing out the mistakes here (or any other time anyone interacts with an LLM) only serves to make the system's answers more accurate as time goes on. So the more we publicly bash these things for what they DO get wrong, the more we give them exactly what they need to make sure it happens less frequently in the future. We're the suckers in that deal IMO, and the tech companies know it and leverage it. We KNOW everything we type publicly is being hoovered up by any system that can access and/or pay for it.
So in the end, IMO it isn't so much where these models stand right this minute, but rather where the trend is pointing. Then the only variable left is time.