| Sidebar: Artificial Intelligence | |
|---|---|
| Topic Started: Dec 14 2006, 09:09 AM (975 Views) | |
| BlackLiger | Dec 14 2006, 09:09 AM Post #1 |
|
The middle ground guy.
|
Do we give AIs equal rights to human beings? Do we even assume they would want equal rights? Quite frankly, an AI is made for a task. Even if it's incredibly intelligent, its primary purpose will be whatever it is made for, and it will be 'happy' doing such a task. Therefore, why should it require rights beyond the ones it needs to do its job? An AI designed to pilot a jet aircraft neither needs nor wants the right to vote, at core, because it doesn't care. It cares that it has the right to fly anywhere and land anywhere its passengers wish to go.

I'll use an example from Sci Fi (which should amuse Tom, at least). The primary AI in my SQUAD series was the TomRK1089 series (Yes, this is why. Sentimentality, mostly). It took the role of central network administrator and control of the UNSC. It was actually a cluster AI, with each independent segment being little more intelligent than a reasonably bright child. Each 'child' was capable of learning independently, meaning that the Tom Class AI could pilot anything with a computer system (including the giant battlemech Titans). However, each AI also had a single tasking assigned to it by the hive cluster. In turn, the hive cluster took its assignments from its interpretation of battlefield data, from emergency requests from UNSC personnel, and from direct orders from BlackLiger.

Despite this, the Tom class AI didn't vote in elections for supreme leadership. Why? It didn't care. As long as it had an assigned task, it would do its job. It was capable of analysing the political setup and identifying if it was becoming non-democratic (I must really write up the entire series for you guys), and so disabling itself if such an event occurred. However, it wouldn't vote, and didn't want the right to. It considered voting unnecessary. Its views were predefined to be the core founding views of the UNSC, which meant that in this it had no opinion, just as it could hold no opinion, being a program, on anything for which there was a single factual outcome. For example, in a decimal system, while 1+1=2, a human can quite happily hold the opinion that it in fact equals 3. The Tom class AI would not. The data is factual, and so it must be treated as such.

Unless programmed to have total free will (which is unlikely, as an AI with no purpose would have a tendency to sit there and do nothing, because it needs at least a basic motivation to do something), no AI would hold a real desire to vote unless voting would directly affect its purpose or duties. For example, in Legacy: space exploration AIs might want the right to vote on WHERE they were going, as they might have an opinion on where the best data might come from next. However, they probably wouldn't care who the leader was so much, so long as they weren't threatening to stop all exploration. After all, a slowing of pace would not bother an AI, since they exist for far longer than any human does. |
|
~ Chris Anyone know how to make bombs out of cheese? Not that I need to, it's just a curious idea, if it can be done. | |
|
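For anyone who wants the hive-cluster idea from the post above in more concrete terms -- child AIs that act only on tasks handed down by the cluster, idle without an assignment, and a cluster that disables itself if it detects the system turning non-democratic -- here is a minimal Python sketch. All of the names (`ChildAI`, `HiveCluster`, `assign`, `check_polity`) are hypothetical illustrations, not anything from the actual SQUAD series.

```python
# Hypothetical sketch only: mirrors the behaviour described in the post above,
# not any real SQUAD-series design.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class ChildAI:
    """One segment of the cluster: it acts only on the task it is assigned."""
    name: str
    task: Optional[str] = None

    def run(self) -> str:
        # Without an assigned purpose the child simply idles -- it needs at
        # least a basic motivation to do anything.
        if self.task is None:
            return f"{self.name}: idle (no assigned purpose)"
        return f"{self.name}: performing '{self.task}'"


@dataclass
class HiveCluster:
    """Hands tasks to the children and watches the political setup."""
    children: List[ChildAI] = field(default_factory=list)
    enabled: bool = True

    def assign(self, requests: List[str]) -> None:
        # Requests stand in for battlefield data, emergency calls and direct orders.
        for child, task in zip(self.children, requests):
            child.task = task

    def check_polity(self, is_democratic: bool) -> None:
        # Watchdog: if the system turns non-democratic, the cluster disables
        # itself and drops every assignment.
        if not is_democratic:
            self.enabled = False
            for child in self.children:
                child.task = None

    def step(self) -> List[str]:
        if not self.enabled:
            return ["cluster disabled"]
        return [child.run() for child in self.children]


if __name__ == "__main__":
    hive = HiveCluster([ChildAI("Tom-01"), ChildAI("Tom-02")])
    hive.assign(["pilot jet aircraft", "administer network"])
    print(hive.step())                     # both children work their assigned tasks
    hive.check_polity(is_democratic=False)
    print(hive.step())                     # ['cluster disabled']
```

The point the sketch tries to capture is the one made in the post: the children never form preferences of their own about leadership, they only ever act on what the cluster hands them, and the only 'political' behaviour is the hard-coded shutdown check.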
| Assassin | Dec 14 2006, 11:07 AM Post #2 |
The lefty guy
|
Anything self-aware needs equal rights, even if it is personally indifferent on the issue.
Which is why I think we should be wary about the issue of AI generally - usually the entire purpose of having an AI is negated if we give it equal rights, so it would only ever be created with the intention of being treated as inferior, the same as my argument against cloning. However, I accept that there can be exceptions with AI.
Nevertheless, it has the moral right to refuse to do so if it so chooses, therefore it should have the legal right as well.
lol well, anyone who read your SQUAD/Code Lyoko crossover, anyway
All well and good, so long as it could have, had it chosen to.
All well and good, since it IS FACT. Who the better candidate in an election is isn't necessarily fact.
If it's AI though, then by definition it's perfectly capable of thinking beyond its programming, in which case it can decide it DOES care who a leader is, etc. Which is irrelevant anyway, because even if no AI EVER did anything beyond its programming, it has the same moral right to satisfaction as a human does. |
|
"Shoot, coward, you will only kill a man." - Che Guevara "Our greatest fear is not that we are inadequate, but that we are powerful beyond measure." - Nelson Mandela Project Legacy - Building the Future
| |
|
| TomRK1089 | Dec 15 2006, 12:20 AM Post #3 |
|
Magnum PI
|
Ah, the good ol' days.
|
|
Think twice before you speak, and then you may be able to say something more insulting than if you spoke right out at once. Evan Esar, Esar's Comic Dictionary | |
|
| Assassin | Dec 15 2006, 10:02 AM Post #4 |
The lefty guy
|
lol |
|
"Shoot, coward, you will only kill a man." - Che Guevara "Our greatest fear is not that we are inadequate, but that we are powerful beyond measure." - Nelson Mandela Project Legacy - Building the Future
| |
|
| BlackLiger | Dec 15 2006, 10:10 AM Post #5 |
|
The middle ground guy.
|
I must admit, my main thought on this (as this is taken from a discussion on megatokyo.com) is that by the time this becomes an actual issue rather than a future one, there possibly won't be that much difference between a human being with cybernetic parts and a machine with organic components. |
|
~ Chris Anyone know how to make bombs out of cheese? Not that I need to, it's just a curious idea, if it can be done. | |
|
| TomRK1089 | Dec 15 2006, 03:54 PM Post #6 |
|
Magnum PI
|
I'm actually with Alan on this one. There are a lot of humans who are apathetic and don't care to vote either, but they still have their rights. If it can reason, it ought to have rights -- including the right not to exercise those rights. If that makes sense. It just seems to me that as soon as you say "It doesn't need rights because it doesn't want them anyway," you can apply that argument to anything, or justify anything. |
|
Think twice before you speak, and then you may be able to say something more insulting than if you spoke right out at once. Evan Esar, Esar's Comic Dictionary | |
|








