Has anyone used AI to aid in repair?

I have used AI as an assist - but not on the actual repairs. I've used it when I'm trying to figure out how the code works.

Back 40 years ago, when I was fluent in 6502 assembly language - sure - I could read the source code. Now I'm not as confident in the exact operations, and I'm trying to figure out why something is not working as I expect, not 'relearn' 6502 assembler.

I cut and pasted the source into ChatGPT, and it helped clarify some of the things I was not certain about in the code.
I consider this repair adjacent.
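This "repair adjacent" use can also go one step further: instead of trusting a chatbot's summary of a half-remembered instruction, you can model the instruction yourself and check it. Below is a toy sketch (mine, not from any post above) of the 6502 ADC (add with carry) instruction in binary mode only - decimal (BCD) mode is deliberately left out - which is handy for confirming carry/overflow flag behavior:

```python
# Toy model of the 6502 ADC (add with carry) instruction, binary mode only.
# Useful for double-checking flag behavior you half-remember instead of
# trusting a chatbot's summary. Decimal (BCD) mode is deliberately omitted.

def adc(a, operand, carry_in):
    """Return (accumulator, carry_out, overflow) after ADC."""
    total = a + operand + carry_in
    result = total & 0xFF
    carry_out = 1 if total > 0xFF else 0
    # Signed overflow: set when both inputs share a sign bit that
    # differs from the result's sign bit.
    overflow = 1 if (~(a ^ operand) & (a ^ result) & 0x80) else 0
    return result, carry_out, overflow

# 0x50 + 0x50: no unsigned carry, but signed overflow (the classic gotcha)
print(adc(0x50, 0x50, 0))  # -> (160, 0, 1)
```

Ten lines of Python like this settle an argument with an AI faster than re-reading the programming manual.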
 
Just don't ask an AI for the pinout of a 27C322 EPROM. :ROFLMAO: Such tools are dangerous for the average user.
 
Just don't ask an AI for the pinout of a 27C322 EPROM. :ROFLMAO: Such tools are dangerous for the average user.
You can ask - just be prepared for a wrong answer.

I've used AI a few times, and every time, and I mean EVERY TIME, I've been disappointed.

It mines from places like this which are pretty solid, but also some of the less technically correct groups like Facebook, so you get a mish-mash of information, and if you can't tell what is right, you can let the magic smoke out pretty fast.
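One cheap cross-check before letting an AI-supplied pinout anywhere near your burner is plain arithmetic: the chip's capacity and word width dictate how many address pins it must have. A minimal sketch, assuming the commonly documented 27C322 organization of 32 Mbit as 2M x 16 (verify against the actual datasheet):

```python
import math

def address_lines(capacity_bits, word_width):
    """Number of address pins needed for a memory of the given size."""
    words = capacity_bits // word_width
    return int(math.log2(words))

# 27C322: 32 Mbit organized as 2M x 16
print(address_lines(32 * 2**20, 16))  # -> 21, so expect A0..A20 on the pinout
```

If the AI's pinout shows the wrong number of address or data lines for the part's stated capacity, it is wrong before you ever touch the datasheet.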
 
some of the less technically correct groups like Facebook,
According to OpenAI, they do not pull data from private sources. Facebook's content is largely private or restricted, and scraping its data would violate its terms of service.

ChatGPT (allegedly) only pulls from publicly available sources. So unless the Facebook content is made public, then ChatGPT can't see it.
 
You can ask - just be prepared for a wrong answer.

I've used AI a few times, and every time, and I mean EVERY TIME, I've been disappointed.

It mines from places like this which are pretty solid, but also some of the less technically correct groups like Facebook, so you get a mish-mash of information, and if you can't tell what is right, you can let the magic smoke out pretty fast.
If one has no idea what he is doing, he just takes the answer as true. No matter what the subject is. Not good at all.
 
If one has no idea what he is doing, he just takes the answer as true. No matter what the subject is. Not good at all.
That is the challenge.

For people who are new to the hobby, there is a LOT of information out there.

Some of it comes from reputable sources.

Some of it doesn't.

Unless you can parse between the reputable and the non-reputable, you could end up in a bad place.
 
According to OpenAI, they do not pull data from private sources. Facebook's content is largely private or restricted, and scraping its data would violate its terms of service.

ChatGPT (allegedly) only pulls from publicly available sources. So unless the Facebook content is made public, then ChatGPT can't see it.
So they say.

Then again, Google can find information in our DMs. So I guess they have that going for them.
 
If one has no idea what he is doing, he just takes the answer as true. No matter what the subject is. Not good at all.
Well yeah... this concept has always been true and always will be. Misinformation and bad answers have been around since the dawn of time. The only thing that has changed is how easy it is for the end user to obtain it.

It's still your job as the end user to filter it all and use your brain to make an educated decision about whether the answer you are getting is accurate. Always cross-check and validate. It's hard to do, but everyone should be living a "zero-trust" lifestyle: trust nothing and question everything.

IMO, this whole thing with AI is just a big experiment. A giant test to see how much they can make the public trust something. They just want to see how much of your own trust you are willing to hand over. How much of your own thinking are you going to trust to a machine in the cloud?

In other words, the amount of AI we see infesting our world will be directly proportional to the amount of trust we hand over to it.

Don't be the person that drives your car off a cliff because your GPS said so.
 
AI is really good at some things, but I guess the pool of information for it to draw from for arcade and pin repair isn't very good. I tried it a few times recently with various pinball problems, and it was wrong a lot.
 
My wife asked "is Spinal Tap 2 still in theaters?"

AI's FAIL answer:
There is no such movie (wrong).
 
Even worse than simply being wrong on important details frequently, all of those models apparently have been instructed to just make shit up rather than respond that they really don't know something.

If those ignorant things don't poll various posted answers to a query and compare results before they spit out a reply, then they are as worthless as the single-source media "fact checkers" rampant these days.

Another indication that in spite of numerous anecdotal benefits, the net result of the internet will be the destruction of civil society.
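The "poll answers and compare" idea a couple of posts up is easy to sketch: collect several independent answers, take a majority vote, and refuse to answer when there is no clear consensus. The answer strings below are made up for illustration, not real pinout claims:

```python
from collections import Counter

def majority_answer(answers, min_agreement=0.5):
    """Return the most common answer if it clears the agreement
    threshold, else None (i.e., admit 'I don't know')."""
    if not answers:
        return None
    best, count = Counter(answers).most_common(1)[0]
    return best if count / len(answers) > min_agreement else None

# Three sources agree, one dissents: accept the consensus.
print(majority_answer(["pin 12 is GND", "pin 12 is GND",
                       "pin 12 is GND", "pin 12 is VCC"]))
# Even split: refuse to answer rather than guess.
print(majority_answer(["yes", "no"]))  # -> None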
 
I tried to use my hammer to pound in a screw. It didn't work right. Hammers are useless.
I fully understand your contrasting comments here.

I see the problem as one of a lack of knowledge.

Someone like you could look at an AI generated response, and understand if it is right or wrong.

Now I want you to go back to when you were new to this. You no longer have the knowledge you have gained through years of work.

Now ask the same question. Will you get the same outcome?

Others have noted AI is an "aggregation" tool - it strips content and assembles what it THINKS is an answer.

Facebook groups (which we pretty much all make fun of) are a source of content and misinformation (as we pretty much have all confirmed).

Without the knowledge to see what could be bad advice, a poor outcome could occur.

Does that mean AI has no value? No.

Just not so much value in this (and other) cases.
 