Question to the admins about an AI project

Skynet is almost here....

Fuck me. I don't know if that's what Ken Layton would recommend. But it's definitely what andrewb has recommended (as well as other people here). Every one of those things, and pretty much in that order. And with sending it out always being the last option.

Just look at any thread where I've used the phrase 'visual inspection'. Here are all of them:



Sadly, you used to be able to find much of this information via a basic Google search. But Google and others are intentionally kneecapping their own search now (it gets more terrible by the day), so they can justify these newer AI-based solutions.

And while it seems impressive on the surface, I can't help but keep asking myself, 'Is this REALLY that much better than a good Google search was ten years ago?' And the answer I keep coming back to is: not really. It just seems better today because search, and the web in general, are such shit.

It seems like most of the world just got lazy and stopped googling things years ago, in favor of 'asking your friends' on Facebook (which we've DEFINITELY seen in this hobby), regardless of whether the information is good or not. And these new AI 'solutions' are just a sort of response to that, where it's the same information you could have googled once upon a time, but in a more conversational form, and lazier. Because people will do whatever involves the least effort. Even if it sucks.

I still would rather have the Google of ten years ago, where the human was still an active part of the search process (and I could make up my own mind based on what info I found, from multiple sources), rather than being a passive thing that just accepts whatever answer is given.

The latter is not a good thing.
 
The real concern is what @andrewb will do with all his free time when AI takes over everyone's need to search for things. Won't be a need to remind people how to do it.


As long as the AI is named andrewb, I'm fine with it.

You have to admit, if we did get to a point where there is an oracle that gives GOOD answers to tech questions, and people actually use it, that would be good for the hobby in a way. Or at least better than where we are now. (But not as good as 10-15 years ago).

As long as it wasn't telling people to buy repro parts they don't need, or install the sense mod, it could be useful. But any tool is only as smart as the person using it.
 
The real, non-comical response to this is: while I think it's a perfect-world Hallmark card sentiment, it's not something that can happen.

Look at this place... There is really only a small gathering of active people. Yet within that small group there are different ways, different ideas, different skill levels, and different opinions about what's important. So many variables, so many egos, so much resistance to change. Someone will do something differently, and the purist of purists will call it an improper, bastardized shortcut. What if, in the AI utopia you allude to, an idea other than yours is promoted as the proper way, the most time-efficient and cost-effective? Would you stand by and support it? Again, we can't even agree on a proper timeframe for service communication and repairs, or on who owns what intellectual property; I highly doubt anyone will stand by quietly if their ideas are not voted #1 by AI.

It's a shame, really. Guys like Ken Layton, Ron Rich, and all the techs I have known in the industry who have departed over the past 30 years leave a void.

If it is infamy you seek, I suggest switching gears. Embrace Arcade 1ups and be the best service tech for that. At least you can ride that wave as long as the cardboard cabinet lasts! LOL. Built-in obsolescence is the true winner here.
 
If it's tech-related to the point that it can readily find and summarize data the same way Google or this site's search feature can serve up info, then yes. The big difference is the conversational inquiry and the ability to provide relevant data based on source data. Obviously hallucination errors can occur, so it's like anything else... if someone on a message board tells you to just pop off the anode cap and you'll be ok, you probably want to take that with a grain of salt.

But if you could dump your knowledge and experiences in the field into a searchable database, chances are it would be very beneficial... and live on for people to access as this info and this field die off.
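For what it's worth, here's a minimal sketch of what that "searchable database" could look like using embedding search. This is just one possible approach, not anything proposed in this thread; the library choice, model name, and example posts are all my assumptions.

```python
# Hedged sketch: index forum posts as embedding vectors, then return the
# most similar posts for a natural-language question. The library, model
# name, and sample posts below are illustrative assumptions only.
import numpy as np
from sentence_transformers import SentenceTransformer

posts = [
    "Always discharge the monitor before touching the chassis.",
    "Check the edge connector for corrosion before swapping boards.",
    "Reflow the flyback solder joints if the picture blooms.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
post_vecs = model.encode(posts, normalize_embeddings=True)

def search(query, k=2):
    # Cosine similarity reduces to a dot product on normalized vectors.
    q_vec = model.encode([query], normalize_embeddings=True)[0]
    scores = post_vecs @ q_vec
    best = np.argsort(scores)[::-1][:k]
    return [(float(scores[i]), posts[i]) for i in best]

print(search("picture is blooming, what should I check?"))
```

Note that a retrieval setup like this only ever returns posts someone actually wrote, so the hallucination risk mentioned above doesn't apply to the lookup step itself, only to any generative summary you might bolt on top.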

I'm not 100% sold on AI, but you have to admit, it has some solid potential.

There is a fundamental difference between search and generative AI.
 
If it is infamy you seek, I suggest switching gears. Embrace Arcade 1ups and be the best service tech for that.


If it were fame I was seeking, that would be a good route. For infamy, I think I'm doing just fine, lol. :)

It would be fun to train different LLMs using the text histories of specific individuals here. Then ask them each the same question, and let them argue it out.

A KLOV Fantasy WWE league, if you will. Pick your team today!
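Half-joking sketch of how that league could work: one persona model per member, same question fanned out to each. Everything here is made up for illustration (the bot names, the canned replies, and the generate_reply placeholder); a real version would run inference on models actually fine-tuned on each member's post history.

```python
# Hypothetical sketch of the 'fantasy league': one persona model per
# forum member, the same question to each, answers printed side by side.
def generate_reply(persona_model: str, question: str) -> str:
    # Placeholder: a real implementation would call a model fine-tuned
    # on that member's post history. Canned text stands in here.
    canned = {
        "andrewb_bot": "Start with a visual inspection, then check voltages.",
        "purist_bot": "Original parts only. A repro board is not a repair.",
    }
    return canned[persona_model]

question = "No picture, but the neck glow is there. Now what?"
for bot in ("andrewb_bot", "purist_bot"):
    print(f"{bot}: {generate_reply(bot, question)}")
```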
 
Re: applying LLM AI against the KLOV data...

I've had the same thought. Google search sucks; it's ad search now.

And regular text search here is too broad to find something specific.

Concerns about data quality could be handled if you didn't apply the model to every user's data set. And while repair would be nice, I could see the model being trained across areas such as restoration as well.
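One crude way to act on that idea, sketched below: gate the corpus on a vetted-contributor list before any model ever sees it. The author names and data layout are hypothetical, purely to show the shape of the filter.

```python
# Hedged sketch: keep only posts from vetted users before indexing or
# training. TRUSTED and the post fields are illustrative assumptions.
TRUSTED = {"andrewb", "trusted_tech_2"}  # hypothetical vetted list

def filter_corpus(posts: list[dict]) -> list[dict]:
    """Drop posts whose authors aren't on the vetted list."""
    return [p for p in posts if p["author"] in TRUSTED]

corpus = [
    {"author": "andrewb", "text": "Check B+ at the test point first."},
    {"author": "randomguy", "text": "Just pop off the anode cap, you'll be ok."},
]
print(filter_corpus(corpus))  # only the vetted post survives
```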

Interested in seeing where this goes...
 
You're really gonna trust LLMs with technical advice about working with HV??

This happens to be my particular technical area of expertise (computational linguistics, not high voltage haha) and I would definitely NOT trust it for something like that.

What, me worry? 9 out of 10 hobbyists following AI's advice will likely live to tell about the experience. Now, after turning the monitor on, do you connect the static wrist strap to the blue wire or the red wire first?

Calm down... What could go wrong?!?

"For proper CRT operations, please lick the little suction cup that is attached to the tube. This ensures a tight seal is made and there is no electron evaporation."

Damn that's funny.
 
Please bear in mind, LLMs model *language* -- they do NOT model the underlying thing or system being described. The output of an LLM is simply the highest probability sequence of words based on the input "prompt," with some post-process smoothing to make it more grammatically correct. But if you're looking for something that understands your question, understands the underlying system (e.g., monitor chassis, game PCB, etc.), and understands how to formulate an answer integrating the two, you're far better off asking an expert like @andrewb (or reviewing previous threads where he's answered similar questions in the past to help develop your own understanding), rather than blindly following the output of an LLM.
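To make that concrete, here is a deliberately toy illustration of the mechanism: the model just picks whichever token it scores highest, with zero knowledge of the monitor itself. The vocabulary and scores below are fabricated; a real LLM computes its logits from billions of parameters, but the selection step is the same basic idea.

```python
# Toy sketch of next-token selection: the 'model' only ranks words.
# Nothing here knows what a CRT is; the scores are hard-coded.
import numpy as np

VOCAB = ["discharge", "the", "anode", "cap", "lick", "first"]

def fake_logits(context: list[str]) -> np.ndarray:
    # Stand-in for a real network's output layer; ignores context.
    return np.array([2.0, 0.3, 1.1, 0.9, -2.0, 1.4])

def next_token(context: list[str]) -> str:
    logits = fake_logits(context)
    probs = np.exp(logits) / np.exp(logits).sum()  # softmax
    return VOCAB[int(np.argmax(probs))]            # greedy choice

print(next_token(["before", "working", "inside,"]))  # -> "discharge"
```

The point being: the output is chosen because it is statistically likely as text, not because anything checked it against a real chassis.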
 
lol, I'm glad you are in alignment with the response presented. Maybe you can help me on a couple of the points described. The first bullet after 3 states "Clean the edge connectors on the PCB". I can't seem to find the edge connectors on my Defender PCB; maybe you can point me to them.

Secondly,
I "checked for continuity" on ALL the caps on my board, and not a single one seems to have continuity. Do I need to replace them all, or is there something wrong with my meter?

The biggest issue I have, aside from the nonsense answers provided, is the idea that AI will properly interpret the poor language and/or descriptions I write while trying to help people with very context-specific problems, and then serve up similar nonsense to other people while representing it as "my" answers. I don't need any additional help looking stupid; I make enough silly mistakes myself.
 


This is an interesting phenomenon that I've been observing more and more in public commentary.

People jumping on AI responses, picking apart replies, saying, "Hey look! It's wrong about THAT little detail! What a piece of garbage. These things will never be as smart as people! Look how dumb it is!"

And while all of these systems DO respond with complete and utter bullshit some percentage of the time, you have to give them credit for how accurate a lot of the responses are a lot of the time. And it's not pure chance. The fact that Greg's Ken Layton question above associated him with monitors is as funny as it is relevant, even if it wasn't the best answer a human could possibly give.

I call it the 'moving goalpost Turing Test'. People used to use 'Turing Test' as a milestone where a human interlocutor wouldn't be able to tell whether they were talking to a machine or another human. But we've already reached a point where a non-trivial percentage of the population can be fooled into not knowing the difference, a non-trivial amount of the time. (And those numbers literally increase every day.) So it isn't a black-and-white thing. It's a continuum. And the slider is moving every day.

Also, the fact that these systems have COMPLETELY mastered linguistic structure, spelling, and grammar goes completely unmentioned. They don't make spelling, grammar, or structural mistakes. EVER.

Now what you hear is, "Well, a REAL Turing Test would involve a subject matter *expert* talking to an AI, and trying to tell the difference." And what happens when it starts fooling experts (e.g., the fact that models are now passing medical and bar exams)? "Oh, well it'll have to fool a TEAM of experts."

The goalposts keep moving. That's not to say the technology is anywhere near perfect, nor that I'm advocating or defending it. But people seem so focused on what it CAN'T do, that they're not acknowledging what it can.

In this case yes, you can interpret 'edge connector' and 'continuity' literally, and jump all over that response. But the nits are getting pickier.

Functionally, an edge connector can be a header, ribbon connector, or whatever connects the 'edge' of the board to adjacent things. Most boards have finger-based edge connectors. Defender is one exception. Ok, fine. But conceptually is that any worse a mistake than a lot of *humans* would make in other contexts?

It's also not TOO far of a stretch to interpret 'checking components for continuity' to mean checking continuity TO and FROM those parts. After all, that comment is under the 'Check for Broken Traces or Components' heading. That actually feels like something *I've* said here, as I know I often say 'check everything for continuity', 'use your DMM, not your eyes', etc.

Yes, technically you are correct. But those things aren't THAT wrong. And you have to admit, the list overall is pretty accurate otherwise.

Not to mention, the act of pointing out the mistakes here (or any other time anyone interacts with an LLM), only serves to make the system's answers more accurate as time goes on. So the more we publicly bash these things for the things they DO get wrong, the more we just give them exactly what they need to make sure it happens less frequently in the future. We're the suckers in that deal IMO, and the tech companies know and leverage it. We KNOW everything we type publicly is being hoovered up by any system that can access and/or pay for it.

So in the end, IMO it isn't so much where these models stand right this minute, but rather where the trend is pointing. Then the only variable left is time.
 
It'd be useless to train an AI using all the garbage data on this site.

The lack of technical moderation to remove / demote wrong answers here has been a problem for decades.
 

LLMs are more like Searle's "Chinese room" thought experiment than the Turing test. A fundamentally important point here is that LLMs are completely unable to reason about the question or the subject at hand, as they do not rely on any kind of model of the underlying reality. As such, they cannot answer novel logic questions that any 4-year-old human could easily answer. No inductive reasoning, no deductive reasoning, no nothing. Just fancy symbol manipulation. Which can still be useful in certain contexts, but do not trust it for advice on working with HV!
 

Mark's statement is a reflection of the models' training on scraping the 'net as a whole. Garbage in, garbage out. Since it's nearly impossible to "tweak" the model, how does one filter out incorrect/bad/undesirable information? Answer: we can't, at scale.

Scale is the answer to all questions in this arena!
 
A fundamentally important point here is that LLMs are completely unable to reason about the question or the subject at hand, as they do not rely on any kind of model of the underlying reality.


That's rapidly changing though.

And that's also why everyone is scrambling to build AI robots. Integrating multi-modal information about the world, beyond just text, is the next level of machine learning, and aims to build on today's models (and fill in their gaps).

Look up Yann LeCun's lectures on what he's doing at Meta.
 