One of those nasty little questions that at first seems simple until you give it hard thought.
Personally, I think a machine has rights if it can be made to feel harm; by that extension, you can apply a golden rule. If the machine is capable of being harmed (and of comprehending that harm) in a way you would not want to be harmed yourself in the same situation, then it has that right.
However, I think a truly self-aware computerised intelligence would appear completely alien to us. The error in a lot of AI creation is trying to make computers act like humans, despite their completely different construction and method of comprehension.
In this way, machine intelligence should be approached the same way we would approach alien intelligence. If its form of intelligence is substantially different, with different motivations behind its decision-making, then understanding those differences matters if we want to prevent it from doing harm that we would consider immoral.
Asimov got there a long time ago, with I, Robot.
Quote:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
Without such restrictions, our own values and self-preservation would be at risk if we allowed an alien or robot intelligence too much power over mankind.