do you use free AI tools?

I don't use them day-to-day, but over the last 3 years of my MBA I've copied homework and exam questions into AI just to see what it spits out. If the question involves any math, then unless it's a really simple question you wouldn't need help with anyway, the answer is almost always wrong — but it comes with a convincing-looking mathematical rationale, which is dangerous if you don't really have a clue what's going on. If you do have a clue, you have to evaluate every single thing it spits out to figure out why it's wrong, and if you can do that, you can just answer the question without AI. So I don't trust anything it spits out. My corporate finance class was probably the worst: it was word-problem-based, the math could involve lots of steps, and AI failed miserably.
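To be concrete, here's a made-up illustration of the kind of multi-step finance math I mean (my own example, not an actual exam question): an NPV check takes only a few lines of Python, and that's how I'd verify an AI's arithmetic rather than trust its rationale.

```python
# Hypothetical example: checking a multi-step corporate finance answer
# (NPV of a project) instead of trusting an AI's arithmetic.

def npv(rate, cashflows):
    """Net present value; cashflows[0] is the time-0 flow."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Project: pay 1000 today, receive 400/year for 3 years, 10% discount rate.
result = npv(0.10, [-1000, 400, 400, 400])
print(round(result, 2))  # -5.26 — the project slightly destroys value
```

If the AI's step-by-step rationale disagrees with a five-line script like this, the script wins.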
 
This tech is only as good as the data it was trained on.
Well, that's true of just about any software that does real work. AI and LLMs are in their relative infancy. Going forward, this is a game changer.

I say I love the tech, because it was my career; it was fascinating. Embracing the tech, aka change, turned out to be very rewarding, in so many ways. I miss it.

In my career, my mantras were:
  • Computers should be information giving machines, not information asking machines.
  • The only important thing is the business need being addressed.
  • IT can contribute to the bottom line if it is aligned with corporate goals.
Perhaps consider Sabine's thoughts... I understand my thoughts may fly in the face of others'.
 
I'd add to these :giggle:

- At the end of the day, the only business need is to do more work with fewer (or zero) mouths to pay.

- Corporate goals eventually align with getting rid of IT as well, since IT staff are substantial mouths to pay.

- As savvy IT folks cruise on a yacht built from the skulls and bones of less savvy, more junior IT folks, "Watchmen: Tales of the Black Freighter" style, the top of the pyramid keeps thinning out, and more and more people end up woven into said yacht's structure. "There can be only one!" works in movies; it's a bit more concerning in real life.
 
I have no issue with AI. I think it's going to improve and decimate certain segments of our modern world at the same time. I do not fear it. There will not, IMHO, be a network effect — like Google had, for example. Corporations will want their own models, with their own data sets. It will be proprietary data, because garbage in, garbage out.

The general data out there for free now is a mess: some very accurate, some patently wrong. How do you know which is which? You can't, so it's not useful. However, let's say the NIH loads an AI model with all the actual health studies it has and makes it available to professionals. Immediately it becomes super useful. So I'm sure that's the route we'll go: not one giant AI, at least initially.

So back to "do you use free AI tools": my answer is no, because the ones I have tried are garbage. I always ask about something I know professionally to see what it gives me, and it's mostly junk. Maybe in the future?
 
I use it, then think about the answer it gave me. Is the answer rational? Do I need to refine my question to get a better answer? I find the AI answer is blatantly wrong about 25% of the time on complex questions. The end user still needs to think.

One area where AI excels is writing code. I have had reasonably complex tasks for which I needed Python, SQL, or shell scripts. I am not a programmer, but I do have some programming experience. AI gets the script right on the first try about 50% of the time. Every other time, once I've shown it how I want the script improved, it has given me a correctly working script 100% of the time. It's quite impressive.
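For flavor, here's a made-up example of the kind of contained glue task I mean (not one of my actual scripts): summing an amount column per category from a CSV, stdlib only — small enough that I can actually verify what the AI hands back.

```python
# Hypothetical illustration of a small scripting task one might hand to AI:
# total an "amount" column per "category" from CSV text, stdlib only.
import csv
from collections import defaultdict
from io import StringIO

def totals_by_category(csv_text):
    """Return {category: summed amount} from CSV text with a header row."""
    totals = defaultdict(float)
    for row in csv.DictReader(StringIO(csv_text)):
        totals[row["category"]] += float(row["amount"])
    return dict(totals)

data = "category,amount\nfood,10.5\nfood,4.5\nrent,800\n"
print(totals_by_category(data))  # {'food': 15.0, 'rent': 800.0}
```

The nice thing about tasks at this scale is that even a non-programmer can sanity-check the output against the input by hand.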
On the other hand, as far as programming goes, I barely know C++ and Java (took them 25 years ago), so when I've been writing stuff in JS or Python, I've found ChatGPT does a decent enough job giving you a framework that it saves a huge amount of time that I would otherwise spend bumbling my way through. I have also discovered that it has improved significantly in the last year, as I had to make some changes to one of my JS scripts (which I wrote with its help originally, but with some considerable modification, because it didn't work), and not only did it not screw anything up this time, but it also helped me make it "future proof" in terms of accommodating additional data streams in the information being pulled in.
 