Yea, well…for the heavy lifting it could be nice. But I’m not letting AI build my house.
Lifting heavy crap to the roof or something like that? Sure. That is what machines are good at.
Welding? Well, welding robots have existed for a long time; they just need to be programmed perfectly. I’ve worked with a couple of them: the results are not always consistent, and they required some quality checks. Still, it is easier; manual welding takes more skill and takes longer. I just don’t need the AI part, though. That makes it unpredictable. And if I let a robot do something, it should be predictable.
I do woodworking and fix things around the house. Even when building new stuff, there are always issues you have to solve on the spot: walls that are not straight, angles that are not perfect, spaces you cannot reach, et cetera.
As long as AI does not get it 100% right every time, it is not touching my house. And yes, a professional doesn’t reach that rate either, but at least they know it, double-check their own work, and know how to fix things.
AI can also be made to double-check its own work and fix things.
Well, why did it not do it right the first time, then? If the double-check gives a different result, which is the right result? If I can ask the same question twice and get two different answers, how do I, or the machine, know which answer is right? And if the machine knows, why would it need to double-check? A machine can do it right the first time if it knows how, right?
You said:
As long as AI does not get it 100% right every time, it is not touching my house. And yes, a professional doesn’t reach that rate either, but at least they know it, double-check their own work, and know how to fix things.
Well, why didn’t the human professional do it right the first time, then? If it’s okay for a human professional to make mistakes because they can double-check and fix their mistakes, why is it not okay for machines to do likewise?
Because a machine is expected to do it right the first time. Because it’s supposed to do the exact same thing every time with the exact same input parameters. If you give it the exact same input every time and get a different result every time, it is not reliable enough to function as automation.
Humans are just that. Humans. They make mistakes sometimes. The reason humans can keep doing the work is that there is no better alternative. Machines can’t do it, so who else is gonna do it? Either humans build your house or nobody does. There is little choice there.
So if a machine is to take over that job, it had better do it right, reliably, and cheaper. Because humans can already do it right and reliably. And there’s little money saved if a human still needs to check all the work.
Because a machine is expected to do it right the first time.
No, it’s not. And it doesn’t have to be, because, as I pointed out, it can check its own work.
You’ve got a mistaken impression of how AI works, and of how machines in general work. They can make mistakes, and they can recognize and correct those mistakes. I’m a programmer; I have plenty of first-hand experience. I’ve written code that does it myself.
Yes, that’s the plan.
(Read: “I don’t actually understand how ML works”)
It’s not AI. Stop calling it AI.
The term “artificial intelligence” has been in use since the 1950s and it encompasses a wide range of fields in computer science. Machine learning is most definitely included under that umbrella.
Why do you think an AI can’t double-check things and fix them when it notices problems? It’s a fairly straightforward process.
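A minimal sketch of what I mean, in Python; attempt, verify, and repair here are hypothetical stand-ins for whatever does the real work and the real checking:

```python
def do_with_checks(task, max_attempts=3):
    """Attempt a task, verify the result, repair until it passes.
    attempt/verify/repair are hypothetical stand-ins."""
    result = attempt(task)                 # first try; may contain mistakes
    for _ in range(max_attempts):
        problems = verify(result, task)    # independent check of the output
        if not problems:
            return result                  # passed the double-check
        result = repair(result, problems)  # targeted fix, then re-check
    raise RuntimeError("nothing passed verification; escalate to a human")
```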
The halting problem. Machines cannot, by logic, double-check themselves.
What are you trying to argue, that humans aren’t Turing-complete? Which would be an insane self-own. That we can decide the undecidable? That would prove you don’t know what you’re talking about; it’s called undecidable for a reason. Deciding an undecidable problem makes as much sense as a barber who shaves everyone who doesn’t shave themselves.
Aside from that, why would you assume that checking results would, in general, involve solving the halting problem?
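For the record, the standard proof that a universal halting checker can’t exist is exactly that barber construction. A sketch in Python, with a hypothetical halts() oracle:

```python
def halts(program, input_data):
    """Hypothetical oracle: returns True iff program(input_data) halts."""
    ...

def paradox(program):
    # Do the opposite of whatever the oracle predicts about
    # running `program` on itself.
    if halts(program, program):
        while True:  # oracle says it halts, so loop forever
            pass
    return           # oracle says it loops, so halt immediately

# Does paradox(paradox) halt? If halts(paradox, paradox) is True, then
# paradox(paradox) loops forever; if False, it halts immediately. Either
# way the oracle is wrong, so no such halts() can exist -- the barber
# who shaves everyone who doesn't shave themselves.
```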
It has nothing to do with whether humans are Turing-complete or not. No Turing machine is capable of solving an undecidable problem. But humans can solve undecidables. Machines cannot solve the problem the way a human would. So, no, humans are not machines.
This by definition limits the autonomy a machine can achieve. A human can predict when a task will cause a logic halt and prepare or adapt accordingly; a machine can’t, unless intentionally limited by a programmer to stop being Turing-complete and account for the undecidables beforehand (thus with the help of the human). This is why machines suck at unpredictable or ambiguous tasks that humans fulfill effortlessly on the daily.
This is why a machine that adapts to the real world is so hard to make. This is why autonomous cars can only drive in pristine weather, on detailed pre-mapped roads with really high maintenance, with a vast array of sensors. This is why robot factories are extremely controlled and regulated environments. This is why you have to rescue your Roomba regularly. Operating on the biggest undecidable there is (i.e. the future parameters of operation) is the biggest as-yet-unsolved technological problem (next to sensor integration for world parametrization and modeling). Machine learning is a step towards it, on a road several thousand miles long that has yet to be traversed.
No, we can’t. Or, more precisely: there is no version of your assertion that would be compatible with cause and effect, compatible with physics as we understand it.
Don’t blame me, I didn’t do it. The universe just is that way.
Yet we live in a world where millions of humans assert their will over undecidables every day. Because we can make irrational decisions, logic be damned. Explain that one.
The halting problem is an abstract mathematical issue; in actual real-world scenarios it’s trivial to handle cases where you don’t know how long a process will run. Just add a check that watches for the process running too long and breaks into some kind of handler when that happens.
I’m a professional programmer, I deal with this kind of thing all the time. I’ve literally written applications using LLMs that do this.
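A minimal sketch of that watchdog pattern, assuming the task can run in a worker thread; run_with_watchdog and handle_timeout are hypothetical names, not from any particular library:

```python
import concurrent.futures

def run_with_watchdog(task, timeout_s=30.0):
    """Run task(), but stop waiting if it takes too long, instead of
    trying to predict up front whether it will ever finish."""
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    future = pool.submit(task)
    try:
        return future.result(timeout=timeout_s)
    except concurrent.futures.TimeoutError:
        # The practical answer to "will it halt?": don't decide it,
        # just bound the wait and hand the case to recovery logic.
        return handle_timeout(task)  # hypothetical recovery hook
    finally:
        # Don't block on a possibly still-running worker; a production
        # version would run the task in a killable subprocess instead.
        pool.shutdown(wait=False)
```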