Posts: 7392
Threads: 53
Joined: January 15, 2015
Reputation:
88
RE: The future of AI?
March 28, 2016 at 12:21 pm
(This post was last modified: March 28, 2016 at 12:23 pm by I_am_not_mafia.)
(March 28, 2016 at 12:08 pm)IATIA Wrote: (March 28, 2016 at 7:31 am)Mathilda Wrote: Or drowning in a very cold lake? Or has its wrists slit in a very hot bath?
A good programmer can overcome all these difficulties, and specifically addressing the above quote, if the robot is doing its job, the above scenarios should not happen in the first place. And in the end, shit just happens.
In which case it's not AI, if you're relying on the programmer to specify everything. That's the point I am trying to demonstrate. Intelligence is the ability to adapt to an unknown environment; otherwise you might as well use a lookup table.
But how do we design desirable ways to adapt if we ourselves cannot know every environment the agent could find itself in? There are too many to specify, and whatever situation you handle explicitly, there will be a myriad of exceptions. This, in a nutshell, is why the top-down approach does not work and why AI stagnated for 30 years. It is why we use the bottom-up approach instead: start with something simple situated in an environment and let it discover for itself how to adapt. But that also means you cannot then apply Isaac Asimov's top-down rules.
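The bottom-up idea being described can be sketched with a toy example: an agent dropped into an environment whose payoffs were never specified to it, which adapts by trial and error rather than by programmer-supplied rules. This is a minimal epsilon-greedy bandit sketch; the function names and parameters are illustrative, not anything from the thread.

```python
import random

def adapt(reward_probs, steps=2000, epsilon=0.1, seed=0):
    """Agent that learns which action pays off in an environment
    it was never given a rule or lookup table for."""
    rng = random.Random(seed)
    estimates = [0.0] * len(reward_probs)  # learned value of each action
    counts = [0] * len(reward_probs)
    for _ in range(steps):
        if rng.random() < epsilon:
            # explore: try something at random
            action = rng.randrange(len(reward_probs))
        else:
            # exploit: do what has worked so far
            action = max(range(len(reward_probs)), key=lambda a: estimates[a])
        reward = 1.0 if rng.random() < reward_probs[action] else 0.0
        counts[action] += 1
        # incremental running average of observed reward
        estimates[action] += (reward - estimates[action]) / counts[action]
    return estimates

# The environment rewards action 1 far more often; the agent discovers
# this for itself, with no rule naming that environment in advance.
est = adapt([0.2, 0.8])
```

A hard-coded lookup table would need an entry for every possible environment; the point of the bottom-up approach is that the same loop adapts to whatever `reward_probs` happens to be.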
Posts: 18510
Threads: 129
Joined: January 19, 2014
Reputation:
90
RE: The future of AI?
March 28, 2016 at 12:27 pm
(March 28, 2016 at 12:57 am)IATIA Wrote: (March 28, 2016 at 12:29 am)Alex K Wrote: You'll have to be more specific than that it runs "on an interrupt system". How does the system notice that the AI intends to hurt humans?
Microprocessors have an interrupt input that will force it to jump to a subroutine. This subroutine would check for interaction with a human. It would be as simple as having a clock circuit that would send an interrupt every 100ms or whatever is deemed appropriate. I take it that you are not involved with microprocessors at the machine level.
I've written Assembly. Have you ever trained an AI? My point was that controlling the "thoughts" of a complex AI is super complicated.
What you want to do is basically put a completely independent pattern-recognition layer on top of the AI that stops the machine in crucial situations. That might work for scenarios with few degrees of freedom, but if I project a couple of decades into the future, with AI freely interacting with the world, e.g. as mobile robots, you will have to know the intentions of the AI; blunt pattern recognition like "don't stab the human-shaped thing" won't do.
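The two positions can be made concrete with a small sketch. IATIA's proposal is a periodic timer interrupt that preempts the controller and runs a safety check; here the timer is simulated as a check every N ticks of a control loop (on real hardware it would be a timer IRQ). All names and the trivial predicate are illustrative assumptions, not from the thread.

```python
CHECK_EVERY = 10  # stands in for "send an interrupt every 100ms"

def safety_check(state):
    # Alex K's objection lives in this one line: the predicate has to
    # recognise "about to harm a human" from raw state, and for a
    # learned system with many degrees of freedom that is the hard part.
    return state.get("human_nearby", False) and state["tool"] == "chainsaw"

def run(controller, state, ticks=100):
    for tick in range(ticks):
        # the "interrupt": fires regardless of what the controller wants
        if tick % CHECK_EVERY == 0 and safety_check(state):
            return ("halted", tick)  # watchdog overrides the controller
        controller(state)
    return ("completed", ticks)

def controller(state):
    if state["progress"] > 3:
        state["human_nearby"] = True  # a human wanders in mid-task
    state["progress"] += 1

result = run(controller, {"tool": "chainsaw", "progress": 0})
# halts at the first check AFTER the human appears (tick 10), not at
# tick 4 when the human actually arrived
```

Note the window between ticks 4 and 10 where nothing intervenes: even granting the mechanism, the coarse check interval and the crudeness of the predicate are exactly where the disagreement lies.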
The fool hath said in his heart, There is a God. They are corrupt, they have done abominable works, there is none that doeth good.
Psalm 14, KJV revised edition
Posts: 7392
Threads: 53
Joined: January 15, 2015
Reputation:
88
RE: The future of AI?
March 28, 2016 at 12:38 pm
Goal: sterilise room.
ERROR: HUMAN SHAPED THING IN ROOM.
Action: Activate chainsaw mode.
Sitrep: No human shape in room
Action: Apply acid.
Posts: 10769
Threads: 15
Joined: September 9, 2011
Reputation:
118
RE: The future of AI?
March 28, 2016 at 12:50 pm
Here's where we're at now:
http://www.livescience.com/42561-superco...ivity.html
The article refers to a supercomputer taking 40 minutes to model one second of human brain activity. Let's oversimplify and say that means a human brain is about 2,400 times more 'powerful' than the supercomputer (40 minutes is 2,400 seconds). The 'power' of a supercomputer will have to double about 11 or 12 times for it to be 'powerful' enough to simulate human brain activity in real time. If, and I concede that's a big if, Moore's Law in its general sense holds for another 17 years or so, that's when we can expect a supercomputer to be capable of that. And (if Moore's Law still holds) it could be compressed to a desktop within another 10 years. So, if everything holds (which sooner or later it won't), roughly speaking: desktop AI as 'powerful' as a human brain in 25 to 30 years. I'll be in my mid-seventies by then if I make it that long, but there's a chance I can have a very good simulation of a person for company.
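The back-of-envelope estimate checks out directly, assuming one doubling every 18 months (an assumption of this sketch, since "Moore's Law" is used loosely above):

```python
import math

seconds = 40 * 60            # 40 minutes of compute per 1 s of brain time
gap = seconds                # the supercomputer is ~2,400x too slow
doublings = math.log2(gap)   # doublings needed to close the gap
years = doublings * 1.5      # assuming one doubling per 18 months

# about 11.2 doublings, i.e. roughly 17 years if the doubling rate holds
```

Add the further doublings needed to shrink a supercomputer to a desktop and you land on the "another decade or so" figure quoted above.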
I'm not anti-Christian. I'm anti-stupid.
Posts: 4196
Threads: 60
Joined: September 8, 2011
Reputation:
30
RE: The future of AI?
March 28, 2016 at 12:51 pm
(This post was last modified: March 28, 2016 at 12:55 pm by IATIA.)
(March 28, 2016 at 12:27 pm)Alex K Wrote: I've written Assembly. Have you ever trained an AI? My point was that controlling the "thoughts" of a complex AI is super complicated.
What you want to do is basically put a completely independent pattern-recognition layer on top of the AI that stops the machine in crucial situations. That might work for scenarios with few degrees of freedom, but if I project a couple of decades into the future, with AI freely interacting with the world, e.g. as mobile robots, you will have to know the intentions of the AI; blunt pattern recognition like "don't stab the human-shaped thing" won't do.
I have never been trained in AI programming, though I messed around with some preliminary programming on my own back in the days of BBSs.
I never insinuated that it would be easy, only that it is doable, hence the reason for starting now; after all, it is only a machine. Even if it became sentient, the watchdog timer would still override the system, which means that, technically, it would never have 'free will'.
For the machine to even function, let us say sweeping the floor, there has to be some pattern recognition to identify the floor, broom, walls, etc. There then needs to be a subroutine for the action. Basic functions such as these do not even require AI per se. Where the AI would come in is the actual navigation of the room and the difficulties of sweeping in corners, under the table and such. Basically, I would think the AI itself would utilize various subroutines in all aspects of action. There is no reason to think that we would not know the 'thoughts' of the machine at any time. It is a machine that, regardless even of sentience, is still only running a program.
I did a generous amount of programming back in the day, mainly on the M6809 and M68K. The code for setting up monitoring, logging, breakpoints, etc. was sometimes larger than the program I was troubleshooting, but I could 'see' exactly what was happening at any moment.
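The observability claim here can be sketched: if the agent dispatches everything through subroutines, wrapping each one gives a full trace of its "thoughts", much like the monitoring harnesses described above. The sweeper tasks and names are illustrative assumptions.

```python
trace = []

def traced(fn):
    """Wrap a subroutine so every dispatch is logged."""
    def wrapper(*args):
        trace.append((fn.__name__, args))  # record the machine's "thought"
        return fn(*args)
    return wrapper

@traced
def identify(obj):
    # stand-in for the pattern-recognition subroutine
    return obj in {"floor", "broom", "wall"}

@traced
def sweep(area):
    # stand-in for the action subroutine
    return f"swept {area}"

# The run is fully reconstructible from the trace:
identify("floor")
sweep("corner")
```

The counterpoint in the replies below is that for a learned model the equivalent trace is millions of weight updates, not human-readable steps like these.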
You make people miserable and there's nothing they can do about it, just like god.
-- Homer Simpson
God has no place within these walls, just as facts have no place within organized religion.
-- Superintendent Chalmers
Science is like a blabbermouth who ruins a movie by telling you how it ends. There are some things we don't want to know. Important things.
-- Ned Flanders
Once something's been approved by the government, it's no longer immoral.
-- The Rev Lovejoy
Posts: 7392
Threads: 53
Joined: January 15, 2015
Reputation:
88
RE: The future of AI?
March 28, 2016 at 1:04 pm
(March 28, 2016 at 12:50 pm)Mister Agenda Wrote: Here's where we're at now:
http://www.livescience.com/42561-superco...ivity.html
The article refers to a supercomputer taking 40 minutes to model one second of human brain activity. Let's oversimplify and say that means a human brain is about 2,400 times more 'powerful' than the supercomputer (40 minutes is 2,400 seconds). The 'power' of a supercomputer will have to double about 11 or 12 times for it to be 'powerful' enough to simulate human brain activity in real time. If, and I concede that's a big if, Moore's Law in its general sense holds for another 17 years or so, that's when we can expect a supercomputer to be capable of that. And (if Moore's Law still holds) it could be compressed to a desktop within another 10 years. So, if everything holds (which sooner or later it won't), roughly speaking: desktop AI as 'powerful' as a human brain in 25 to 30 years. I'll be in my mid-seventies by then if I make it that long, but there's a chance I can have a very good simulation of a person for company.
And that's not even taking into account the many orders of magnitude more processing power required for artificial evolution, or whatever form of parameter optimisation you want to use if you want to create your own version.
Also, that model of a human brain would use simplistic models of neurons. It has to, because we cannot measure a whole brain extensively enough to know the location and state of everything in it that can possibly compute.
And if you somehow did all that, you'd just end up with a digital version of a real brain. We'd still need to figure out how it works. Even evolving a simple neural network or circuit diagram can take several months to figure out what's happening. I remember spending two weeks crippling a simple three-layer, biologically plausible neural network and wondering why it kept working, albeit at lesser performance.
Posts: 7392
Threads: 53
Joined: January 15, 2015
Reputation:
88
RE: The future of AI?
March 28, 2016 at 1:10 pm
Patch Notes 1.0.1
Do not use Chainsaw mode on human shaped object
Goal: sterilise room.
ERROR: HUMAN SHAPED THING IN ROOM.
Action: Switch off light.
Sitrep: No human shape sensed in room
Action: Apply acid.
Posts: 18510
Threads: 129
Joined: January 19, 2014
Reputation:
90
RE: The future of AI?
March 28, 2016 at 1:22 pm
(March 28, 2016 at 12:51 pm)IATIA Wrote: (March 28, 2016 at 12:27 pm)Alex K Wrote: I've written Assembly. Have you ever trained an AI? My point was that controlling the "thoughts" of a complex AI is super complicated.
What you want to do is basically put a completely independent pattern-recognition layer on top of the AI that stops the machine in crucial situations. That might work for scenarios with few degrees of freedom, but if I project a couple of decades into the future, with AI freely interacting with the world, e.g. as mobile robots, you will have to know the intentions of the AI; blunt pattern recognition like "don't stab the human-shaped thing" won't do.
I have never been trained in AI programming, though I messed around with some preliminary programming on my own back in the days of BBSs.
I never insinuated that it would be easy, only that it is doable, hence the reason for starting now; after all, it is only a machine. Even if it became sentient, the watchdog timer would still override the system, which means that, technically, it would never have 'free will'.
For the machine to even function, let us say sweeping the floor, there has to be some pattern recognition to identify the floor, broom, walls, etc. There then needs to be a subroutine for the action. Basic functions such as these do not even require AI per se. Where the AI would come in is the actual navigation of the room and the difficulties of sweeping in corners, under the table and such. Basically, I would think the AI itself would utilize various subroutines in all aspects of action. There is no reason to think that we would not know the 'thoughts' of the machine at any time. It is a machine that, regardless even of sentience, is still only running a program.
Sure, even a complex neural net is "only a program" too, in the sense that a human is only a bunch of atoms interacting. But if it is learning, we do not automatically understand afterwards how it does what it does. I still think what you propose only works in simple applications.
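That point holds even in miniature: a trained model is "only a program", yet its behaviour lives in opaque numbers rather than inspectable rules. A toy sketch, a perceptron learning the AND function (the training setup is illustrative, not from the thread):

```python
import random

rng = random.Random(1)
# weights start as meaningless random numbers
w = [rng.uniform(-1, 1) for _ in range(2)]
b = rng.uniform(-1, 1)
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# classic perceptron learning rule: nudge weights toward each target
for _ in range(50):
    for x, target in data:
        err = target - predict(x)
        w[0] += 0.1 * err * x[0]
        w[1] += 0.1 * err * x[1]
        b += 0.1 * err

# The model now computes AND, but nothing in (w, b) "says" AND;
# you only learn what it does by probing its behaviour.
```

Scale those three numbers up to millions of learned parameters and "just read the program" stops being a way to know the machine's intentions.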
The fool hath said in his heart, There is a God. They are corrupt, they have done abominable works, there is none that doeth good.
Psalm 14, KJV revised edition
Posts: 1164
Threads: 7
Joined: January 1, 2014
Reputation:
23
RE: The future of AI?
March 28, 2016 at 1:29 pm
(This post was last modified: March 28, 2016 at 1:34 pm by JuliaL.)
I'm not really worried in the near term, as 'bots are still having trouble getting past CAPTCHAs.
Regarding their ultimate triumph:
I still don't see a problem, because the doomsayers all seem to think there is some object called "problem solving" or "intelligence" which can be encapsulated and incorporated as a machine function.
Intelligence is contextual to and interactive with the environment a machine intelligence would inhabit. As such, there is no single underlying algorithm to discover.
Some worry about AIs, intentionally or accidentally, displacing humanity. I suspect that if they were given motivation (for instance, to seek novelty), they would be more likely to get bored on this little planet and go out investigating the universe. This is something they would be better suited for than we are, as they would be able to back themselves up, suspend operations, and re-awaken once stellar distances had been traversed. The fact that we don't see this all around us is evidence that technological civilizations don't survive long enough for it to happen. Shit happens; in our case, climate change.
I did like the idea that the first successful imitation-game winner would be a non-player character in a massively multiplayer online role-playing game. It's a little saddening, and chilling, that Microsoft might be getting there first.
So how, exactly, does God know that She's NOT a brain in a vat?
Posts: 1164
Threads: 7
Joined: January 1, 2014
Reputation:
23
RE: The future of AI?
March 28, 2016 at 1:31 pm
(This post was last modified: March 28, 2016 at 1:36 pm by JuliaL.)
Oops, double post, sorry.
So how, exactly, does God know that She's NOT a brain in a vat?
|