<!-- received="Thu Oct 1 09:10:03 1998 MST" -->
<!-- sent="Thu, 1 Oct 1998 07:51:00 -0700 (PDT)" -->
<!-- name="Joe Jenkins" -->
<!-- email="[email protected]" -->
<!-- subject="Re: Economic Growth Assuming Machine Intelligence" -->
<!-- id="[email protected]" -->
<!-- inreplyto="Economic Growth Assuming Machine Intelligence" -->
<!-- version=1.10, linesinbody=32 -->
<html><head><title>extropy: Re: Economic Growth Assuming Machine Intelligence</title>
<meta name=author content="Joe Jenkins">
<link rel=author rev=made href="mailto:[email protected]" title="Joe Jenkins">
</head><body>
<h1>Re: Economic Growth Assuming Machine Intelligence</h1>
Joe Jenkins (<i>[email protected]</i>)<br>
<i>Thu, 1 Oct 1998 07:51:00 -0700 (PDT)</i>
<p>
<ul>
<li> <b>Messages sorted by:</b> <a href="date.html#5">[ date ]</a><a href="index.html#5">[ thread ]</a><a href="subject.html#5">[ subject ]</a><a href="author.html#5">[ author ]</a>
<!-- next="start" -->
<li><a href="0006.html">[ Next ]</a><a href="0004.html">[ Previous ]</a>
<b>In reply to:</b> <a href="0001.html">morris/arla johnson</a>
<!-- nextthread="start" -->
<b>Next in thread:</b> <a href="0009.html">Robin Hanson</a>
</ul>
<!-- body="start" -->
<p>
---Robin Hanson &lt;[email protected]&gt; wrote:
<p>
<a name="0009qlink1"><i>> Economic Growth Assuming Machine Intelligence</i><br>
<a href="0001.html#0005qlink1">> http://hanson.berkeley.edu/aigrow.pdf , .ps</a><br>
<i>> by Robin Hanson</i><br>
<p>
Why have you assumed a state of slavery for the sentient beings? How
would self-ownership and a right to one's own wages for the machine
intelligences change your results?</a>
<p>
<a name="0009qlink2">Another assumption I'm still grappling with, is that the labor
population of machine intelligences could grow as fast as desired to
meet labor demand. I'm not so sure human or greater intelligences
wouldn't have a problem once they start seeing thousand of copies of
themselves in the population. Although, all you need to find is one
willing participant, this may still be a problem. Could you find one
human who is willing to undergo constant torture? Would the world be
satisfied if it starts seeing skyrocketing suicide rates? I know most
will disagree with me on this, but I think there might be some
socio-political unknowns that are not at all foreseeable that might
stop this dead in its tracts.</a>
<p>
Joe Jenkins
<br>
<!-- body="end" -->
</body></html>