I needed software to format exFAT. The AI gave me old software, though that did let me search Synaptic for the correct package. But it wasn't actually trying to fix my computer, and the answer was wrong. All I can say is: beware.
I find ChatGPT to be among the worst of AI chatbots. It's the Microsoft Windows of AI, in my opinion. My favorite AI is Perplexity, primarily because it provides the sources for everything it reports.
As for using AI to diagnose computer system problems, my success rate is far lower than my failure rate. The best answers come when the details you provide are accurate and complete.
For example, asking an AI, "My 2020 MacBook Pro with an M1 CPU and 16GB RAM running macOS Tahoe 26.1 is losing wireless connectivity periodically. What might be the problem and solution?" is much better than, "How can I fix my computer losing wifi?"
Even then, the solutions given, while detailed and complete with scripts and commands, usually don't wind up fixing the problem.
And AI never admits its mistakes. On more than one occasion, I've tried to follow its directions only to find out they're wrong.
AI: Delete the files /System/Library/Application Support/someWidgetOrAnother.plist and /System/Library/Startup Items/someOtherWidget.plist.
Me: Those files don't exist.
AI: That's right, Apple removed them in macOS Tahoe 26.1.
Me {thinking}: Then why did you waste my time suggesting something that wouldn't work?
I have found some nifty commands for future use, but most of the time I have to rely on my own diagnostics.
I might use ChatGPT IF I were familiar with the subject and could thus detect when it was giving me rubbish.
My current opinion is that AI gives about 85% useful results, with the other 15% being completely unhelpful, mostly incorrect due to timing (i.e. AI can give an answer that applied 2-3 years ago but isn't appropriate for your release!). The problem with what I call unhelpful advice is that it can make the problem worse and give you two problems: you also need to back out the change it told you to make.
What these AIs do is split a query into "tokens". These tokens are building blocks of words (e.g. "building" might get broken into "build" and "ing").
Then they encode the tokens, i.e. they generate a sequence of numbers that correspond to the input. So "build" and "ing" might become "1024" and "4321". The numbers used depend on the encoding and do not relate to things like ASCII codes.
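The split-and-encode steps above can be sketched in a few lines of Python. The vocabulary and ID numbers here are invented purely for illustration; real tokenizers (e.g. BPE) learn their vocabularies and IDs from data:

```python
# Toy tokenizer: split words into known sub-word pieces, then map to IDs.
# The vocabulary and the ID numbers below are made up for this example.
VOCAB = {"build": 1024, "ing": 4321, "the": 17, "room": 905}

def tokenize(text):
    """Greedily split each word into the longest known sub-word pieces."""
    tokens = []
    for word in text.lower().split():
        while word:
            for end in range(len(word), 0, -1):
                piece = word[:end]
                if piece in VOCAB:
                    tokens.append(piece)
                    word = word[end:]
                    break
            else:
                # No known piece matches: skip a character
                # (real tokenizers have byte-level fallbacks instead).
                word = word[1:]
    return tokens

def encode(tokens):
    """Turn token strings into their numeric IDs."""
    return [VOCAB[t] for t in tokens]

print(tokenize("building"))            # ['build', 'ing']
print(encode(["build", "ing"]))        # [1024, 4321]
```

A real tokenizer's vocabulary has tens of thousands of entries, but the principle is the same: text in, list of integers out.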
Then there's a sequence of operations (transformations) performed on the encoded sequence. Each of these involves scaling and adding one or more of the encoded inputs together and passing that as an input to the next layer. So 1024 and 4321 might become 84, 451, and 24601 as inputs to the next layer.
The scales used are called 'weights' and also form the bulk of the parameters - so when an AI model has 1 billion parameters, that's mostly those scales.
Ultimately, there's a final layer (or layers) that decodes. It takes the numbers output from the previous layer and turns them back into tokens and then into an output string. So finally 84, 451, and 24601 become "the room" (or whatever).
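A deliberately tiny sketch of the scale-and-add idea: each layer's outputs are weighted sums of its inputs. The weight values here are invented, and a real transformer also applies attention and nonlinear functions between layers (omitted here, along with the decoding step):

```python
# Tiny sketch of the "scale and add" layers described above.
# The weights are invented; real models add attention and nonlinearities.

def layer(inputs, weights):
    """Each output is a weighted sum of all the inputs."""
    return [sum(w * x for w, x in zip(row, inputs)) for row in weights]

encoded = [1024, 4321]            # token IDs from the encoding step

# Two hand-picked weight matrices (the "parameters" of this toy model).
W1 = [[0.05, 0.007],              # 2 inputs -> 3 outputs
      [0.10, 0.080],
      [2.00, 5.000]]
W2 = [[0.01, 0.002, 0.0005]]      # 3 inputs -> 1 output

hidden = layer(encoded, W1)       # three new numbers fed to the next layer
output = layer(hidden, W2)        # final number, which a decoder would map
print(hidden, output)             # back to a token
```

Stack enough of these layers, with enough weights, and you get the billions of parameters mentioned above.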
The training of an AI model involves finding values of all the parameters (mostly weights) along with the number and arrangement of the layers.
The training procedure is somewhat simple but also somewhat complicated. Take as much data as you can (e.g. examples of sentences, queries, and answers) and test different values of weights until you find values that work best.
This is a long way of saying it doesn't really lend itself to the interpretation of learning "rules". We know the general structure of the model and we know the "low level" of what happens at the scaling-and-adding level. But anything that would seem like a reasonable rule to a human (even grammatical ones) is difficult or impossible to infer with current tools.
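The try-values-and-keep-the-best idea of training can be illustrated with a toy one-weight model. Real training uses gradient descent over billions of weights rather than random search, but the goal is the same: find parameter values that best fit the example data.

```python
import random

# Toy "training": find a weight so that output = weight * input matches
# the example data. Real training uses gradient descent over billions of
# weights; this random search just illustrates "try values, keep the best".

examples = [(1, 3), (2, 6), (4, 12)]   # (input, desired output) pairs

def loss(weight):
    """How badly a candidate weight fits the examples (lower is better)."""
    return sum((weight * x - y) ** 2 for x, y in examples)

best = random.uniform(-10, 10)
for _ in range(10_000):
    candidate = random.uniform(-10, 10)
    if loss(candidate) < loss(best):
        best = candidate

print(round(best, 2))   # should land close to 3.0
```

Note that the search only ever sees numbers and a score; nothing in it resembles learning a human-readable rule, which is the point made above.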
To quickly answer the question in the topic title: I’d say “No, because it has no way to know what’s going on”, so it has no way to fix it.
The current state of AI, any of them (ChatGPT, Meta, Gemini, any of them) is not enough to “trust” them with anything.
Can an AI find a fix for your bug or your problem? Yes. But ALWAYS keep in mind, it doesn’t “think” by itself; it gets all its information from the internet and sometimes, only sometimes, does some basic processing of that information before delivering it to you.
It’s a machine, whose “training” is to sweep the internet and bring you what seems to be a helpful reply. It’s not an android like in the sci-fi movies and tv shows.
Exactly! +10!
Now that all that is settled, there’s no definitive answer for everyone; it all depends on what I need and how I say it or write it down.
I use GPT and Gemini at work for specific tasks, like summarising a video or making up a good title and keywords, stuff like that.
It can be very helpful or give you rubbish; it’s fine as long as you keep that in mind and check everything yourself.
At work (Tax Prep), we use Copilot and I have found it to be extremely useful. It always provides sources, so if a tax interpretation looks a little off I can dig deeper. I have found AI to be a fairly reliable source if you use it as a thought partner. It’s not always right, so you still have to do your due diligence. At the end of the day, I am the one responsible for doing the client’s taxes, not Copilot, so I use it as a tool to point me in the right direction and then I do my own research.
It’s also a decent teacher, as I have used it to help me learn Linux scripting by having it create practice questions for me. I have also had it review some scripts I wrote for game MOD development, with mixed results. It will often pull dated info that is no longer relevant.
For me, I look at Copilot and ChatGPT as an advanced search engine, and you still have to do your own confirmation. Where I draw the line is with AI creativity like graphic design, creative art, or writing, which I consider a form of theft. Then of course we have all the bad actors who will take advantage of all of this with fake videos, fake documents, fake photos, et al.
I just asked AI 4 trick questions; it got one wrong.
Okay, if you put out a ship's ladder that was just above the water, the rungs were one foot apart, and the tide rose one foot an hour for three hours, how many rungs would be under water?
The ship's ladder remains at the same height relative to the ship, even as the tide rises. Therefore, if the tide rises **one foot an hour for three hours**, that totals a **three-foot rise**.
Since the rungs are **one foot apart**, after three hours, **three rungs** of the ladder would be under water.
No, zero would be under water. The ship also rises with the tide.
You're correct! Since the ship rises with the tide, **no rungs** of the ladder would be underwater, even as the tide rises. Thank you for pointing out that detail!
I think using AI should be handled like using an advanced information search tool: you ask for information and also the source it came from. It can shorten your time when you research topics or information you want to become more familiar with.
You should never blindly copy/paste anything or take what it says for granted. AI works in a text-based way, so if a lot of disinformation about something exists online, it confuses the results, and thus one has to tread carefully.
As for fixing things in Linux - I for one had an experiment where I asked ChatGPT to help me with a winetricks setup for an old RTS game that I couldn’t get working after days of trying. I was positively surprised to find the AI very useful at gathering sources for fixes in Lutris that never seemed to reach the top of the algorithmic results when I “googled” for solutions myself. So in that sense it can be helpful, but you should never think it “knows” or “understands” you in the way a human does; that’s kinda like thinking a hammer is sentient and knows you best just because it hits a nail on the head when you use it. If you don’t have absurd expectations of analog tools, you shouldn’t have such expectations of digital ones, is all I’m saying haha