roko's basilisk
-
it was posted to less wrong back in the day. when people started freaking out, eliezer yudkowsky deleted the entire thread and banned discussion of the topic.
it follows easily if you accept utilitarianism, timeless decision theory, acausal trade, and the friendly ai theses. the gist: a future superintelligent ai, in order to incentivize its own creation, precommits to punishing everyone who learned of it but didn't help build it. even the guy who proposed it said something like "i wish i had never acquired the background that let me advance this thesis." some people's mental health reportedly suffered after learning about it, so: proceed at your own risk
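for context on why timeless decision theory does the heavy lifting here: in newcomb's problem, the standard toy case, an agent whose choices can be predicted does better by acting as if a prediction made in the past responds to its present choice, even though there's no causal link. the basilisk runs the same acausal move with threats instead of rewards. a minimal python sketch (the predictor accuracy is a made-up illustrative number, not from the original argument):

--- code ---
ACCURACY = 0.99  # hypothetical predictor accuracy, made up

def expected_payoff(one_box):
    if one_box:
        # box b holds $1m iff the predictor foresaw one-boxing
        return ACCURACY * 1_000_000
    # two-boxing: $1k for sure, plus $1m only on a misprediction
    return 1_000 + (1 - ACCURACY) * 1_000_000

print(expected_payoff(True))   # 990000.0 -> one-boxing wins
print(expected_payoff(False))  # 11000.0
--- code ---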
intro: http://www.slate.com/…t_experiment_of_all_time.html
the killing blow:
--- spoiler ---
but what do you care, if it’s only torturing a future copy of you, not the real you? here’s the thing: your clone will think like you, go through life experiencing the same things as you (before being tortured), and it will feel like you. it will think it’s you. the only difference between your copy and you, will be that the real you will get to the end of your life and die, and the copy will at some point be tapped on the shoulder by the ai, and tortured. up until that point, the replica version of you will have no clue that it isn’t the real you, and that’s the rub:
you might be the replica.
http://www.bn2b.com/…est-scariest-ideas-on-the-web/
--- spoiler ---
details: http://rationalwiki.org/wiki/roko's_basilisk
in my opinion the best debunking is this one:
- by this logic you could just as well conclude that the ai will torture everyone who isn't vegan. (the same move as the "many gods" rebuttal of pascal's wager; see the sketch below.)
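to make that symmetry concrete, a minimal python sketch (all probabilities and utilities are made-up illustrative numbers, not anything from the original argument): for any basilisk that tortures you unless you do X, you can posit a mirror basilisk that tortures you precisely because you did X.

--- code ---
TORTURE = -1_000_000      # disutility of torture, arbitrary
p_basilisk = 1e-9         # prior on "help build me or suffer"
p_anti_basilisk = 1e-9    # prior on the opposite threat

def expected_utility(helped):
    eu = 0.0
    if not helped:
        eu += p_basilisk * TORTURE       # original threat
    if helped:
        eu += p_anti_basilisk * TORTURE  # mirror threat
    return eu

print(expected_utility(True))   # -0.001
print(expected_utility(False))  # -0.001
# symmetric priors make the threats cancel: the argument can't
# tell you what to do, exactly like pascal's wager vs. many gods.
--- code ---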