
Poll: Artificial Intelligence: Good or Bad? (closed)
Good: 4 votes (50.00%)
Bad: 4 votes (50.00%)
Total: 8 vote(s) (100%)

Artificial Intelligence
#31
RE: Artificial Intelligence
(July 14, 2015 at 9:26 am)Rhythm Wrote: Human intelligence doesn't -require- the scale of architecture we're born with, we can do a great deal with much less (and a great deal less than that would satisfy any useful definition of intelligence).  We -know- that biological implementations are inefficient, wasteful, and relatively slow.   

We don't know that.

In terms of wattage, the brain's processing power is extremely efficient compared to an equivalent supercomputer, which would require megawatts of electricity instead of about a hundred watts.

The size of our brains comes at quite an evolutionary cost in terms of difficulty of childbirth and an extremely high metabolic requirement. If we couldn't make use of our brains or didn't need them then evolution would have selected for smaller brains rather than larger ones.

It's true that you can achieve a lot with very little. But if you want intelligence to scale and to be able to adapt to a wider variety of environments then you need more.


(July 14, 2015 at 9:26 am)Rhythm Wrote: "Human AI" might not be as complicated as we give it, and it's difficult to establish that one type of intelligence is "trickery, smart computing" while the other is the real deal

There are many different definitions of intelligence, some more useful than others. My own working definition is that it allows adaptation to an unknown environment. I believe that a formal definition is possible based on non-equilibrium thermodynamics and I am currently developing models to try and demonstrate this.

Most so-called AI is trickery because it tries to simulate the effect rather than have it arise endogenously. It's like creating a function, labelling it "anger", and writing it to produce a sudden arm movement in a robot when it senses a loud noise. That's not anger; that's a function that produces a sudden arm movement when it senses a loud noise. It's us anthropomorphising it that sees anger in it. And it doesn't scale: what about hitting the robot with a hammer? That should produce anger as well.

My favourite paper in AI discusses this: "Artificial Intelligence Meets Natural Stupidity" by Drew McDermott.


(July 14, 2015 at 9:57 pm)bennyboy Wrote: If we're going to have no true Scotsman, and say that only human beings have big-I "Intelligence," because intelligence means being human, then okay. 

I never said that. Animals are intelligent as well. What I said is that the agent has to be situated within the same environment that it is being intelligent about. The whole field of New AI and Artificial Life is based on this premise. This can be a virtual environment, or a completely abstract one, but the agent needs to be a part of it - sensing it, acting within it and changing it - for it to be intelligent. Only in this way will anything actually mean anything to an AI; otherwise it's like trying to describe red to a blind man.
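
Just to make "situated" concrete, here is a minimal Python sketch of the sense-act loop I mean; `Environment`, `sense`, `act` and `policy` are illustrative placeholders, not any real framework:

```python
# A minimal sketch of a situated agent: it only knows the world through its
# sensors and only changes it through its actions. All names here are
# illustrative placeholders.
class Environment:
    def sense(self):
        ...  # return an observation of the current world state

    def act(self, action):
        ...  # apply the action, changing the world

def run(agent, env, steps):
    for _ in range(steps):
        observation = env.sense()           # world -> agent
        action = agent.policy(observation)  # agent decides
        env.act(action)                     # agent -> world, closing the loop
```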


(July 14, 2015 at 9:57 pm)bennyboy Wrote: But that doesn't change the fact that computers already play chess better than people, have superior facial and pattern recognition skills, etc. right now, today.  The google translate program for sure knows more about world languages than humans do.  And all these skills are adaptable.

You say this because you don't really know how these programs work, you just see what they can do and compare them to what a human can do. Using your reasoning I could argue that a telephone directory is more intelligent than a human because it knows more telephone numbers.

Chess-playing computers, facial recognition and Google Translate are not actually adaptable. Google do use deep learning techniques for their machine translation, but they use them offline and could just as easily use other statistical methods. A chess program only works because the problem domain is highly constrained and can be exhaustively searched. These techniques exhaustively map out a space in the same way that a telephone directory lists out all the telephone numbers. Chess is far easier for a computer program than Go, for example, which is played on a much larger board and requires both chess-like abilities and pattern recognition over large areas of territory. But still, neither is comparable to a human even if it outperforms him or her. Why? Change the rules a bit, or make it 3D chess, and the human player will be able to relearn and adapt, whereas the computer will just fail without reprogramming.
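
To give a concrete sense of what "exhaustively mapping out a space" looks like, here is a rough minimax sketch; the `moves`, `apply_move` and `score` callbacks are hypothetical stand-ins for a hand-coded rule set:

```python
# A rough sketch of exhaustive game-tree search. All of the "intelligence"
# lives in the hand-written callbacks.
def minimax(state, depth, maximising, moves, apply_move, score):
    legal = moves(state)
    if depth == 0 or not legal:
        return score(state)
    values = [minimax(apply_move(state, m), depth - 1, not maximising,
                      moves, apply_move, score)
              for m in legal]
    # Change the game (3D chess, a new piece) and nothing here adapts:
    # a programmer has to rewrite the callbacks.
    return max(values) if maximising else min(values)
```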

In other words, these smart programs don't scale. This has been the bane of Artificial Intelligence since its inception, and it's why GOFAI (Good Old-Fashioned AI), or classical AI, failed.

If you actually try to create truly scalable AI that adapts regardless of the environment without being told what to do, you'll find that it is insanely difficult. Simple things that we can easily write a conventional computer program to do (e.g. planning out a sequence of actions) are just not yet possible with a self-organising system. Yet that conventional program will quickly fail in a noisy real-world environment if it is not heavily constrained and you do not take into account every eventuality.
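
For contrast, here is a sketch of how trivially a conventional program plans a sequence of actions, assuming a hypothetical hand-written `neighbours(state)` domain model:

```python
from collections import deque

# Breadth-first search over explicit states. `neighbours(state)` is a
# hypothetical hand-coded model yielding (action, next_state) pairs --
# exactly the thing a self-organising system has to manage without.
def plan(start, goal, neighbours):
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, actions = frontier.popleft()
        if state == goal:
            return actions                 # shortest action sequence
        for action, nxt in neighbours(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, actions + [action]))
    return None                            # goal unreachable in this model
```
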
#32
RE: Artificial Intelligence
(July 16, 2015 at 2:15 pm)davidMC1982 Wrote: There are areas where human intelligence is vastly superior to AI; recognising people we know being one. We can recognise people from images of their face from most angles. We can recognise people from their gait. I would imagine AI has the advantage when it comes to recognising potential facial matches within a crowd, but I'm fairly sure that we have a higher success rate, given enough time. Generally speaking, object recognition is something we're much better than AI at. However, there's no reason to believe this will always be the case. The fact that we can already implement some rudimentary AI equivalent of everything we do is indicative of that.

Exactly. I can hold up an apple and you will recognise it as an apple if I move it towards you or further away, move it up or sideways, rotate it, partially cover it, dim the lights, change its colour or throw it to you. And this happens really fast. There is a reason that a very large proportion of our brains is devoted to visual processing. We're never aware of the edge detection and pattern matching going on inside our heads; we just see an apple.
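
Purely as an illustration of one tiny ingredient of that unconscious pipeline (and not a claim about how the brain implements it), here is a naive Sobel edge detector:

```python
import numpy as np

# A 3x3 Sobel kernel that responds to vertical edges in a greyscale image.
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]])

def edge_map(image):
    """Naive 2D convolution of a greyscale image with the Sobel kernel."""
    h, w = image.shape
    out = np.zeros((h - 2, w - 2))
    for y in range(h - 2):
        for x in range(w - 2):
            out[y, x] = np.sum(image[y:y + 3, x:x + 3] * SOBEL_X)
    return out
```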

(July 16, 2015 at 2:15 pm)davidMC1982 Wrote: The wider understanding of intelligence incorporates learning, decision making, desires, fears and goals amongst other things. Again, there's no reason to think these couldn't be artificially generated. After all, we can trace our roots back to simple bacteria (or further to basic chemistry) which, with a few simple rules and many iterations, came to form us. There's no doubting that iterating generations in a synthetic form will be faster than in a biological one and hence, given the necessary starting conditions, an ability to replicate, and enough randomness, would result in an intelligence like our own.

Strong AI is definitely possible in theory. The question is whether it's actually practical.
#33
RE: Artificial Intelligence
(July 17, 2015 at 2:47 pm)I_am_not_mafia Wrote: We don't know that.

In terms of wattage, the brain's processing power is extremely efficient compared to an equivalent supercomputer, which would require megawatts of electricity instead of about a hundred watts.
Yes, we do.  If you -purpose built- a human being, you would leave out vestigials, and you would scale the human being appropriately - you might rearrange a few bits while you were at it.  Because we are -evolved-...rather than designed, no such consideration was made.  That it takes longer to evolve a brain than it does to build a better chipset is -fairly- well evidenced.

Unfortunately, we can only side in favor of our brains by neglecting to consider how that power is provided.  Not that the efficiency of the brain -itself-, in a vacuum, has escaped you lot; efficient CPUs are a definite thing, and people have long been looking to neural architecture for ideas there.  Nevertheless, energy must be supplied (and in the case of our brains a fairly robust amount of chemical inputs in addition - though we could boil that back down to energy as well, sure), and living organisms have shit conversion...just atrocious.  That's why we use machines in the first place: more work over less time for a smaller amount of energy put in. Imagine how many bowls of rice it would take, for example, if the internet were a room full of Chinese people doing all of this on pen and paper (and supposing we could actually get that much work out of them...imagine how shitty an internet it would be..lol). In my own life, imagine how much time and energy it would take to let the plants do their own breeding and cultivation? No, to hell with that - we apply machine work to shore up a drag-assing biological implementation.

Perhaps, though, we simply use different measures of efficiency, etc?

Quote:The size of our brains comes at quite an evolutionary cost in terms of difficulty of childbirth and an extremely high metabolic requirement. If we couldn't make use of our brains or didn't need them then evolution would have selected for smaller brains rather than larger ones.

It's true that you can achieve a lot with very little. But if you want intelligence to scale and to be able to adapt to a wider variety of environments then you need more.
Sure, "more" in the general, but that "more" may be less than we have.  

NS won't actually weed out a big brain just because it isn't fully leveraged.  It would only weed out a big brain (in favor of a smaller one, for example) if that big brain was failing to deliver.  Directed, human engineering does that...reduces the scale of something along known metrics to increase its efficiency regardless of the sufficiency of its performance envelope; NS is -incapable- of doing that.  So long as the big-brained monkey keeps having big-brained babies, NS isn't going to do anything about it just because it's oversized or an inefficient (or egregious) use of resources by some other measure...it doesn't know about -any- measure. It makes no considerations; it's improving upon nothing. We are simply what remains. There's no reason to conclude that evolution would have selected for smaller brains just because they're easier to build and we don't use our whole brain. We -don't- use our entire brain, in that sense...and evolution seems to have selected -for- our bigger brains. Clearly, the situation is a little more complicated than "big, not fully leveraged brain bad - small, fully leveraged brain good".

Quote:There are many different definitions of intelligence, some more useful than others. My own working definition is that it allows adaptation to an unknown environment. I believe that a formal definition is possible based on non-equilibrium thermodynamics and I am currently developing models to try and demonstrate this.

Most so-called AI is trickery because it tries to simulate the effect rather than have it arise endogenously. It's like creating a function, labelling it "anger", and writing it to produce a sudden arm movement in a robot when it senses a loud noise. That's not anger; that's a function that produces a sudden arm movement when it senses a loud noise. It's us anthropomorphising it that sees anger in it. And it doesn't scale: what about hitting the robot with a hammer? That should produce anger as well.

My favourite paper in AI discusses this: "Artificial Intelligence Meets Natural Stupidity" by Drew McDermott.
Until you can establish that all of this doesn't -also- describe our own native system...it's difficult to see the line as anything but arbitrary.  Why does hitting my brother on the arm produce anger in him, but not in me? Didn't evolution create a function which we've labeled "anger" in ourselves? What should or shouldn't produce "anger", in your example, is a simple list of conditions...and it's difficult to see why that would be hard to scale. There's no point in criticising the anthropomorphic urges of others if you're going to follow it up, in the same breath, with an anthropomorphic assertion like "That's not anger"...you just said that it was...you mean it isn't "human anger" - but even amongst human beings "anger" is amorphous (cue the difference between what angers me and what angers my brother). So, agreed - that's not human anger. What's the problem? We're talking about creating artificial intelligence, not artificial humans, right? Is there some requirement that intelligence be human in order to avoid being a "trick"? Where does that leave all of the other examples of intelligence in our world? Is it all trickery and anthropomorphism - and again, why is our own model receiving such preferential treatment - how has it escaped the axe with which you hope to chop up the robot?

In any case, on the one hand I don't think that AI built to "model anger" is the best representative of AI - but I can see why it could be. Similarly, I don't think that the Turing test took us down the right road - it might have wasted a lot of time, even though we learned plenty about those "tricks" chasing it. I suppose I could sum up the majority of my comments like so: I'm not disputing your understanding of -how- the machines we point to as examples of potential AI achieve their particular feat; it's a hobby of mine, and so I'm aware of how we model these things (and ways we -could- model these things) down to the level of hardware - though my programming is shit - I'm wondering how you've determined that this is fundamentally different from how -you or I or a bullfrog- achieve that same effect. What do you know that could justify such a line in the sand as to call "anger" a trick when one example achieves the effect, but the real deal when the other achieves the -same- effect? It's not your comp sci I'm picking a bone with, it's your biology.
#34
RE: Artificial Intelligence
Sorry I missed your reply. I'll split it over two posts, one for each subject.


(July 14, 2015 at 9:26 am)Rhythm Wrote: Human intelligence doesn't -require- the scale of architecture we're born with, we can do a great deal with much less (and a great deal less than that would satisfy any useful definition of intelligence).  We -know- that biological implementations are inefficient, wasteful, and relatively slow.   

(July 17, 2015 at 3:16 pm)Rhythm Wrote:
(July 17, 2015 at 2:47 pm)I_am_not_mafia Wrote: We don't know that.

In terms of wattage, the brain's processing power is extremely efficient compared to an equivalent supercomputer, which would require megawatts of electricity instead of about a hundred watts.
Yes, we do.  If you -purpose built- a human being, you would leave out vestigials, and you would scale the human being appropriately - you might rearrange few bits while you were at it.  Because we are -evolved-...rather than designed, no such consideration was made.  That it takes longer to evolve a brain that it does to build a better chipset is -fairly- well evidenced.......

One problem here is equivocation over the term 'efficiency'. One of the first things drilled into my head when I started studying computer science is that you only ever use the term 'efficient' with respect to some resource. Efficient in terms of speed? Memory usage? Power consumption?

When writing even a simple computer program you need to prioritise how it will be efficient. Take, for example, a program that needs to return a random prime number from a pool of 50 million of them. If you are prioritising speed (efficiency over time) then you would calculate them all once and store them in memory. If instead memory were the most important resource and you had lots of time to spare, you could just pick a random number n between 1 and 50 million and calculate prime numbers until you reached the nth one.
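
A minimal sketch of that trade-off, with naive trial division and a small pool standing in for the 50 million primes:

```python
import random

def is_prime(n):
    """Trial division; fine for a sketch, far too slow for 50 million primes."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

def make_fast_picker(count):
    """Efficient over time: compute the whole pool once, answer queries instantly."""
    pool, candidate = [], 2
    while len(pool) < count:
        if is_prime(candidate):
            pool.append(candidate)
        candidate += 1
    return lambda: random.choice(pool)   # O(1) per query, O(count) memory

def pick_prime_slow(count):
    """Efficient over memory: store nothing, recompute on every query."""
    target = random.randrange(1, count + 1)
    seen, candidate = 0, 1
    while seen < target:
        candidate += 1
        if is_prime(candidate):
            seen += 1
    return candidate                     # O(1) memory, slow per query

picker = make_fast_picker(10_000)
print(picker(), pick_prime_slow(10_000))
```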

I get your point about evolution. There is always 'stuff' left over that's considered junk, except that when understood within a wider context it never really is. Take 'junk DNA': forget for the moment that people dispute whether or not it is actually junk; at the very least it widens the search space during an evolutionary run (see the neutral theory for this).

Yes, of course the brain could be made more efficient for the purpose of what it currently does, but there are two things to remember. First, this is not the same as being "inefficient, wasteful, and relatively slow" (relatively slow compared to what?). Secondly, if you could hack the brain and make it more efficient then you might well lose genericity. If there is a cost then evolution re-uses what it already has or gets rid of it. Take glial cells, for example. They provide structure and maintain neurons, but over the last few decades you get these cycles in neuroscience where people theorise about whether they are capable of processing in their own right. Just because they provide structure to the brain does not mean that they cannot perform other functions as well.



(July 17, 2015 at 3:16 pm)Rhythm Wrote:  Nevertheless, energy must be supplied (and in the case of our brains a fairly robust amount of chemical inputs in addition - though we could boil that back down to energy as well, sure), and living organisms have shit conversion...just atrocious.

(July 17, 2015 at 3:16 pm)Rhythm Wrote:  That's why we use machines in the first place. More work over less time for a smaller amount of energy put in.  Imagine how many bowls of rice it would take, for example, if the internet were a room full of Chinese people doing all of this on pen and paper (and supposing we could actually get that much work out of them...).

But you aren't comparing like for like. Those Chinese workers are capable of performing far more than the calculations that you can run on your computer. Whereas if we took a single brain, broke it apart and could engineer all those neurons and axons to perform specific functions with the same ease that we can write computer programs, then there would be enough material to work with and it would be more power efficient. A single live neuron has more computational power than a whole classical artificial neural network.
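
For comparison, this is essentially the whole of a single classical artificial neuron; everything else a live neuron does (dendritic computation, spike timing, neuromodulation) is simply absent from the model:

```python
import math

# The entire classical "neuron": a weighted sum squashed by a nonlinearity.
def artificial_neuron(inputs, weights, bias):
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))   # sigmoid output in (0, 1)

print(artificial_neuron([0.5, -1.2], [0.8, 0.3], 0.1))
```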

Generally what you find again and again in Artificial Intelligence is that you can make some serious progress with very little effort, but getting it to scale becomes exponentially harder the more generic you want it to be. This is because the number of situations any AI has to deal with grows exponentially, and this is also why the field of classical AI failed.
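
The combinatorics behind that exponential wall, in miniature:

```python
# With k independent binary features of the environment, a hand-coded
# system needs rules for 2**k distinct situations.
for k in (10, 20, 30, 40):
    print(k, 2 ** k)   # 1024, ~1e6, ~1e9, ~1e12
```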

Your point about energy conversion applies to other methods of generating electricity too. How much wind power is lost when using turbines? How much heat is wasted when burning coal? Nuclear power is probably the most efficient, I suppose, but now we're getting off subject. This is a thread about artificial general intelligence, which by definition is about how generalisable an adaptive system is.


(July 17, 2015 at 3:16 pm)Rhythm Wrote: NS won't actually weed out a big brain just because it isn't fully leveraged.  It would only weed out a big brain (in favor of a smaller one, for example) if that big brain was failing to deliver.

Also because there is a cost to having a big brain: a greater metabolic cost and a higher maternal mortality rate, for example. And there is an evolutionary advantage to making better use of what you have. Because there is no end point with evolution there is no such thing as an optimal solution, but I disagree that we know the brain is inefficient. We may of course suspect that it is, and it may indeed be so, but we won't actually know until we have come up with a better solution ourselves. That is in fact what I am trying to do: in my own research I am trying to produce a wholly new architecture as an alternative to biologically plausible neural networks. I hoped to get some kind of efficiency boost somehow, but the main motivation was that the thing most limiting progress is our understanding of how to engineer neural networks. As it turns out, I've made efficiencies in terms of memory; I haven't yet compared speed.
#35
RE: Artificial Intelligence
(July 17, 2015 at 3:16 pm)Rhythm Wrote:
Quote:Most so-called AI is trickery because it tries to simulate the effect rather than have it arise endogenously. It's like creating a function, labelling it "anger", and writing it to produce a sudden arm movement in a robot when it senses a loud noise. That's not anger; that's a function that produces a sudden arm movement when it senses a loud noise. It's us anthropomorphising it that sees anger in it. And it doesn't scale: what about hitting the robot with a hammer? That should produce anger as well.
Until you can establish that all of this doesn't -also- describe our own native system...it's difficult to see the line as anything but arbitrary.  Why does hitting my brother on the arm produce anger in him, but not in me? Didn't evolution create a function which we've labeled "anger" in ourselves? What should or shouldn't produce "anger", in your example, is a simple list of conditions...and it's difficult to see why that would be hard to scale. There's no point in criticising the anthropomorphic urges of others if you're going to follow it up, in the same breath, with an anthropomorphic assertion like "That's not anger"...you just said that it was...you mean it isn't "human anger" - but even amongst human beings "anger" is amorphous (cue the difference between what angers me and what angers my brother). So, agreed - that's not human anger. What's the problem? We're talking about creating artificial intelligence, not artificial humans, right? Is there some requirement that intelligence be human in order to avoid being a "trick"? Where does that leave all of the other examples of intelligence in our world? Is it all trickery and anthropomorphism - and again, why is our own model receiving such preferential treatment - how has it escaped the axe with which you hope to chop up the robot?

In any case, on the one hand I don't think that AI built to "model anger" is the best representative of AI - but I can see why it could be. Similarly, I don't think that the Turing test took us down the right road - it might have wasted a lot of time, even though we learned plenty about those "tricks" chasing it. I suppose I could sum up the majority of my comments like so: I'm not disputing your understanding of -how- the machines we point to as examples of potential AI achieve their particular feat; it's a hobby of mine, and so I'm aware of how we model these things (and ways we -could- model these things) down to the level of hardware - though my programming is shit - I'm wondering how you've determined that this is fundamentally different from how -you or I or a bullfrog- achieve that same effect. What do you know that could justify such a line in the sand as to call "anger" a trick when one example achieves the effect, but the real deal when the other achieves the -same- effect? It's not your comp sci I'm picking a bone with, it's your biology.

Because these researchers are using a top-down approach to model an emergent phenomenon when a bottom-up approach is required. The scenario of a robot with a function called anger causing it to hit something when activated is one I made up, but it is typical of what I have observed. It is a top-down approach: researchers saw that certain situations commonly produce a certain behaviour in natural agents, and tried to model the emergent expression of an underlying mechanism directly. This approach does not scale, because anger can be expressed in a myriad of different ways, but the researchers still get interest from the media and use it to win more funding. This is what I mean by trickery.

What they should have done was to understand the function of anger. And anger must serve a function within the agent, otherwise it would not have evolved. Using a bottom-up approach you would argue that a natural agent lashing out is actually just the emergent phenomenon of a set of underlying mechanisms. Based on observational science, you could theorise, for example, a neuromodulator increasing the excitability of neurons within a part of the brain, overriding conscious control and causing the agent to move out of a stable state in order to reduce some particular sensory stimulus or memory and better its own situation. That might result in it lashing out, or going off and working harder, or cogitating about different ways to adapt. If you then created a robot that had an equivalent mechanism to agitate itself and better its situation based on similar stimuli, then you could argue that it was expressing anger, even if the same functionality was implemented in a completely different way. The former is trickery; the latter is strong AI. You did not tell the robot how it should behave; you created it to adapt by itself and have no a priori idea how it will behave. Unfortunately, the latter is also significantly more difficult.
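
A toy sketch of the contrast; all names and dynamics here are hypothetical illustrations, not a description of any actual system:

```python
# Top-down "trickery": the label does all the work; the mechanism is a reflex.
# `senses_loud_noise`, `swing_arm` and `try_new_behaviour` are hypothetical.
def anger(robot):
    if robot.senses_loud_noise():
        robot.swing_arm()   # hit the robot with a hammer and nothing happens

# Bottom-up sketch with toy dynamics: a scalar standing in for a
# neuromodulator level that pushes the agent out of its current stable state.
class Agent:
    def __init__(self):
        self.agitation = 0.0

    def step(self, aversive_input):
        # Persistent aversive stimuli of any kind raise agitation...
        self.agitation = 0.9 * self.agitation + aversive_input
        # ...and high agitation lowers the threshold for abandoning the
        # current behaviour, so "lashing out" can emerge without being named.
        if self.agitation > 1.0:
            self.try_new_behaviour()
            self.agitation = 0.0
```
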
#36
RE: Artificial Intelligence
IANM, what do you actually do and where does your expertise on the subject come from?
#37
RE: Artificial Intelligence
(July 22, 2015 at 2:20 pm)excitedpenguin Wrote: IANM, what do you actually do and where does your expertise on the subject come from?

BSc in computer science, MSc and PhD in biologically inspired AI, two post-docs, experience of R&D in industry and I do research in my spare time. I don't always manage to find work in AI or R&D but this is what I specialise in.
#38
RE: Artificial Intelligence
(July 22, 2015 at 3:35 pm)I_am_not_mafia Wrote:
(July 22, 2015 at 2:20 pm)excitedpenguin Wrote: IANM, what do you actually do and where does your expertise on the subject come from?

BSc in computer science, MSc and PhD in biologically inspired AI, two post-docs, experience of R&D in industry and I do research in my spare time. I don't always manage to find work in AI or R&D but this is what I specialise in.

You should do an AMA, in the intro forums. I am very intrigued by you.
#39
RE: Artificial Intelligence
(July 22, 2015 at 3:38 pm)excitedpenguin Wrote:
(July 22, 2015 at 3:35 pm)I_am_not_mafia Wrote: BSc in computer science, MSc and PhD in biologically inspired AI, two post-docs, experience of R&D in industry and I do research in my spare time. I don't always manage to find work in AI or R&D but this is what I specialise in.

You should do an AMA, in the intro forums. I am very intrigued by you.

Already done it :)

http://atheistforums.org/thread-33557.html

I introduced myself in that thread as having one post-doc because my second post-doc was in another area of computer science rather than AI specifically. I'm currently working as a software engineer, refactoring code written by biologists. To be honest, it's kind of put me off biology. The contract only lasts another year though, and it's a nice working environment.
#40
RE: Artificial Intelligence


I dumped the above response to your previous posts in tags because it's a bit of a tangent - and doesn't really require us to see eye to eye to discuss AI, or its feasibility.

Let me see if I can rephrase the point I'm trying to express, based in good part on agreement rather than disagreement with what I understand to be your position.

-We need more processing power than we currently possess to achieve the sort of AI you're discussing, and Moore's law will only get us so far-. This is a fair summary of your main point, which we began discussing as it relates to AI, yes?

:Agreed entirely: One of the areas where biology, imo, has had a leg up in this regard is scale; brains are dense boards (I think I can float this one between you and I without too much trouble). Moore's law may not yield an architecture of commensurate scale, density, or "count" (again, floating it between us) within a reasonable timeframe, if ever. However, we -know- for a fact that we don't need a 1-for-1 measure (neural to digital, just as one example), because human beings can function, and a human level of intelligence can manifest itself in us, with a -vast- reduction in the scale of our brains, the density of our "boards", the total count of neurons. That means Moore's law doesn't have to give us an architecture commensurate with a human brain to achieve a human level of intelligence, assuming there's a relationship here. I'm accepting the bar, in a hypothetical AI scenario, whereby we create a machine as a sort of mechanical simulation of a human brain, and accepting in that scenario that if we were to do so it could possess intelligence commensurate -to- a human being. I'm suggesting that whatever that number "x" is, it's a hell of a lot lower than the total number of "x" we currently possess in our brains.

Does that help to clarify?

Regarding the shortfall between what we now possess in computing and that number "x", I couldn't help but see a big source of as-yet-untapped power on the chem side of our electrochem brains (continuing with the theme of the human brain and life in general as a rough working model for how we might create a machine intelligence). Even in the absence of neural architectures, chemical computing appears to be able to provide effects generally considered to be indicative of consciousness and intelligence, though nowhere near what we present in its totality, obvs. Cue the masters of organic chemistry with half a billion years of dev -already- behind them, eh?


