SETI bioastro: Jordan Pollack Answers AI And IP Questions

From: Larry Klaes (lklaes@bbn.com)
Date: Sat Apr 15 2000 - 10:52:49 PDT


Date: Sat, 15 Apr 2000 00:57:22 -0700 (PDT)
From: Eugene Leitl <eugene.leitl@lrz.uni-muenchen.de>
Reply-To: transhumantech@egroups.com
Subject: [>Htech] /. Jordan Pollack Answers AI And IP Questions

http://slashdot.org/article.pl?sid=00/04/11/0722227&mode=thread

Posted by Roblimo on Wednesday April 12, @10:00AM
from the learning-from-the-leaders dept.

Professor Pollack put a lot of time and thought into answering your
questions, and it shows. What follows is a "deeper than we expected"
series of comments about Artificial Intelligence and intellectual
property distribution from one of the acknowledged leaders in both
fields.

How do you justify your expectations? (Score:5, Interesting) by
Anonymous Coward

For the past 40 years, AI has just been 10 years or so away.

It's still just 10 years or so away.

It's not getting any closer.

How do you justify any degree of optimism about the future of AI at
this point? What makes now fundamentally different from anytime in the
past 40 years?

It is funny: this is the same question I asked Marvin Minsky, the
father of AI, at ALife 5 in Japan. He attacked every modern approach,
including neural nets, fuzzy logic, evolutionary algorithms, and so on
for over an hour, suggesting that his student's (Winston's) thesis
should have been the paradigm of the field! I asked, "If AI sucks so
much, why are you still in the field after 40 years?"

Hypocrite! Here I am, still in the field after 20 years! As soon as
I've convinced myself one approach to AI is too slow, I find another,
leaving quietly without attacking the friends I've made. AI is a big
wide open field with a lot of smart people trying different
things. (Savage attacks by insiders exiting are the worst thing in
science, such as Bar Hillel's attack on Machine Translation in the
60's. Forty years later, MT is "cool" again, in this month's issue of
Wired.)

So I can say, from my perspective as someone who has worked on many
different approaches to AI, that writing problem-space search algorithms
for solving puzzles will not result in a general problem
solver. Automating predicate logic won't make a computer equivalent to
a philosopher. A computer can't do natural language any better than
Eliza, without an internal need to communicate to survive and a large
blessing of custom hardware. Neural nets are great function
approximators with good mathematical results on limited kinds of
learning, but we can't set 12 weights to get what we want, let alone
10 billion weights. And even though simple nonlinear systems give off
chaos and fractals, Kolmogorov's law tells us simple systems are still
simple. Evolution is one path to complexity, but most genetic
algorithms simply search a finite search space and optimize a fixed
goal.
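
To make that last point concrete, here is a toy of the kind of
fixed-goal genetic algorithm I mean (an illustrative Python sketch,
not code from my lab): the search space is finite bitstrings, and the
fitness function never changes.

    import random

    GENOME_LEN, POP_SIZE, GENERATIONS = 32, 50, 100

    def fitness(genome):
        return sum(genome)      # fixed goal: count the 1-bits ("one-max")

    def mutate(genome, rate=0.02):
        return [1 - g if random.random() < rate else g for g in genome]

    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
           for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:POP_SIZE // 2]          # truncation selection
        pop = parents + [mutate(random.choice(parents))
                         for _ in range(POP_SIZE - len(parents))]
    print("best fitness:", max(fitness(g) for g in pop))

Once the population saturates that fixed goal, nothing new can ever
happen; that is the limitation.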

So I'm locally pessimistic but globally optimistic! Who said AI is 10
years away? It's here now, in limited forms, yielding a lot of
economic value, as your mouse clickstream is datamined so the ads
which pop up are for things you might actually buy. But the SF ideal
of a humanoid robot like Commander Data is centuries away.

I hold the view that any system which responds to its environment in a
conditional way based on some internal state, even a thermostat, has a
bit of intelligence. Immune systems, ecologies, and economies design
things and solve problems. Every computer program you write has a bit
of intelligence captured in it. The problem is, real AI of the sort
you are alluding to is an organization which might be realizable as a
10 billion line program or a 10 billion weight dynamical neural
system, and no human software engineering team can write autonomous
code which is more than 10-100 million lines. Even Windows is just
DOS with wallpaper, and big applications always require a human in the
loop, selecting subprograms from menus or command lines.
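
The thermostat claim above is easy to make concrete (a minimal Python
sketch of my own framing, nothing more): a conditional response that
depends on internal state.

    class Thermostat:
        def __init__(self, setpoint, hysteresis=1.0):
            self.setpoint = setpoint
            self.hysteresis = hysteresis
            self.heating = False            # internal state

        def step(self, temperature):
            # conditional response: the same input can yield different
            # outputs depending on the stored state
            if temperature < self.setpoint - self.hysteresis:
                self.heating = True
            elif temperature > self.setpoint + self.hysteresis:
                self.heating = False
            return self.heating

    t = Thermostat(setpoint=20.0)
    print([t.step(temp) for temp in (18.0, 19.5, 21.5, 20.5)])
    # -> [True, True, False, False]: the 19.5 reading would have
    #    produced False if the furnace had not already been on

One variable of state and a conditional: a bit of intelligence, but a
very small bit.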

Since 1994, we've been working on how to automatically evolve physical
symbol systems which would have 10 billion unique moving parts, what
we call "Biologically Complex" systems. When I say "We," it is because
everything I do is in collaboration with my Ph.D. students! A 10
Billion Line program is an absurd goal obviously, but it drives our
research to focus in on the process of growth itself, rather than on
what shortcuts we can accomplish by hand. We look at co-evolution,
which involves machine learners training each other, and on questions
of what kinds of substrates for computing could provide a universe of
functionality while being constrained in a way which reduces the size
or dimensionality of the search space. This constraint is called
inductive bias. We seek minimal inductive bias systems, in which the
human hints, or "gradient engineering" tricks, are fully explicit.
(Sevan Ficici, Richard Watson) We still work on neural nets and
fractals as a substrate, and have made some progress in understanding
how they work (Ofer Melnik, Simon Levy).
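
As a cartoon of what "machine learners training each other" means (a
toy Python sketch, assuming a made-up game where the larger number
wins; our actual systems are far richer), note that neither population
has an external goal, only a relative one:

    import random

    def beats(a, b):               # hypothetical game between two learners
        return a > b

    def select_and_mutate(pop, fit, rate=0.05):
        ranked = [x for _, x in sorted(zip(fit, pop), reverse=True)]
        parents = ranked[: len(pop) // 2]
        children = [min(1.0, max(0.0,
                        random.choice(parents) + random.gauss(0, rate)))
                    for _ in range(len(pop) - len(parents))]
        return parents + children

    hosts = [random.random() for _ in range(20)]
    parasites = [random.random() for _ in range(20)]
    for _ in range(100):
        # fitness is relative: each side is scored only against the other
        host_fit = [sum(beats(h, p) for p in parasites) for h in hosts]
        para_fit = [sum(beats(p, h) for h in hosts) for p in parasites]
        hosts = select_and_mutate(hosts, host_fit)
        parasites = select_and_mutate(parasites, para_fit)

The interesting questions, including mediocrity and collusion, arise
from exactly this relativity of the fitness landscape.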

It's been more than five years, and while we are not even at the
million line mark yet, I am still optimistic and haven't given up on
co-evolution to move to a new field. I think that my lab has made
progress in understanding why Hillis's sorting networks and Tesauro's
Backgammon player were such breakthroughs and where they were
limited. (Hugue Juille, Alan Blair). I think we have begun to
understand the nature of mediocrity as an attractor in educational
systems and how to change the utility functions to avoid collusion,
and apply this to human learning (Elizabeth Sklar). We have become
more applied, bringing co-evolution to the Internet and to robotics,
replicating and extending the beautiful results of Karl Sims from 1994
(Pablo Funes, Greg Hornby, Hod Lipson). All the work is available to
study at the laboratory's Web site.

AI and ethics. (Score:5, Interesting) by kwsNI

What do you say to the people that feel it is unethical to try to
create "intelligence"?

I take this as a shorter version of the longer religious question the
editor thankfully didn't select. I've talked to my rabbi, perhaps one
of the great theologians around today. Even though I am an atheist, he
thinks I am on a spiritual quest to understand [God as] the principles
of the universe which allow self-organization of life as a chemical
process far from equilibrium which dissipates energy and creates
structure that exploits emergent properties of physics. Can a
spiritual quest be unethical? I suggest that people with this question
read Three Scientists and Their Gods, by Robert Wright, or watch the
Morris documentary "Fast, Cheap and Out of Control".

A second ethical question, besides usurping God's rights, is: how can
you take funding from national and military agencies like NSF, DARPA,
and ONR? For the past 50 years at least, they have been the seed
capital for the science behind most of the technological progress I
know about. With the venture capital economy, that curiosity-based
seed function may be privatized, if some of the big VC funds dedicate
10% for long range science, and the ethical question of whether you
are doing something for public good or private gain begins to dominate
over the religious and military questions. That is the same question
many scientists and Linux hackers ask themselves daily: Can I do good
and make money without a conflict of interest?

Turing award. (Score:5, Funny) by V.

Do we win something if we can fool him into answering a
computer-generated question? ;)

It has always been the case that limiting the range of dialog leads to
more successful masquerading. In our CEL online educational game, for
example, the only interactions between players are the actual plays,
which enables artificial agents to be accepted as game partners.

BTW, the Turing Award is an annual lifetime achievement award in
computer science, which has gone to people like John Backus for his
eloquent apology for Fortran when he should have given us APL and
LISP. The Turing Test is the name given to Alan Turing's proposal for
testing for successful AI. Given that we don't deny airplanes fly, I
think if AI ever flies, we won't question it. So I propose using the
Louis Armstrong Test, his answer to the question "What is jazz?"

How should an amateur get started working on AI?
(Score:5,Interesting) by Henry House

It seems to me that a significant problem holding back the development
of AI is that few non-professionals grok AI well enough to offer any
contribution to the AI and open-source communities. What do you
suggest that I, as a person interested in both AI and open source, do
about this? What are the professionals in the AI field doing about
this?

Reading is fundamental.

Frankenstein (Score:5, Interesting) by Borealis

For a long time there has been a fear of a Frankenstein being
incarnated with AI. Movies like The Matrix and the recent essay by
Bill Joy both express worries that AI (in the form of self replicating
robots with some AI agenda) can possibly overcome us if we are not
careful. Personally I have always considered the idea rather
outlandish, but I'm wondering what an actual expert thinks about the
idea.

Do you believe that there is any foundation for worry? If so, what
areas should we concentrate on to be sure to avoid any problems? If
not, what are the limiting factors that prevent an "evil" AI?

AI doesn't kill people. AI might make guns smart enough to sense the
weight or handsize of the user, preventing children from killing each
other. Everything ever invented is capable of good or evil. Evil
arises most often when masses of humans are denied fundamental
rights. The Evil Rate and Unemployment Rate are closely linked.

I read Bill Joy's article in Wired last month. And I loved the
Unabomber's excerpt because it is based on some of the best Philip
Dick paranoid Science Fiction, like Vulcan's Hammer, We Can Build
You, and The Simulacra. There is a lot of SF on the Golem question
and one of my favorites is Marge Piercy's He, She, and It, which
proposes a moratorium on AI inside humanoid robots. You can have smart
software on the Web, and human looking idiobots, but you can't put
real AI inside human looking robots, or you have to pay the price.

My lab is indeed working on self-replicating robotics, and we were worried
for a split second about getting the fetal brain tissue reaction when
our paper comes out shortly. We can now envision the "third
bootstrap", after precision manufacturing and computation, where
machines make the machines which make themselves, just as machine
tools are used to make more machine tools, and computers compile their
own programs. But the replication loop is quite a sophisticated
automatic manufacturing process, which requires a large industrial
infrastructure, and a lot of liability insurance. So far, no VC's,
Saudi Princes, or government agencies have offered the necessary $500M
first round of financing for fullyautomateddesign.com.

It would be wrong of me to say leave my frankenbots alone, and go
after frankenfoods and frankenano. I think Joe Weizenbaum's book
should be required reading, because every few years somebody else
comes up with the idea of inserting computers inside animal bodies, so
that the first act of any war will be to exterminate all nonhuman life
forms. But I do think we have to worry more about large scale
industrial and agricultural processes which are allowed to externalize
their by-products affecting the environment, than we need worry about
robotic ice-9. We will die quicker from e-mail spam caused by viral
marketing customer acquisition schemes or from global warming and
ozone depletion triggering major climatic change, red tide or another
pollutant taking out fish from the food chain, or even from people
throwing away old EGA screens and 386 motherboards in landfills,
poisoning the aquifers. I promise that for every robot we build, there
will be another robot to recycle it when its job is complete.

Anyhow, IMO Joy's angst must reflect the Sun setting on any
instruction set architecture besides x86, but that's a different
discussion. Talk to me about the ethics, when your very own open
source movement leads to the inevitability of an Intel instruction set
monopoly by providing a useful alternative to Microsoft :)

Questions based on your academic path (Score:5, Interesting) by
Anonymous Coward

The way to the field of AI isn't always extremely clear. What type of
background do they expect? Is it mostly a researching position or is
it treated like a normal job with normal goals? Are there any classes
or subjects or schools you recommend to make it into the AI field?
Also, how exactly did you get into the field? How did AI intrigue you
into what you do now, despite all the controversy to create an
intelligence that could possibly be considered a "god" compared to the
human existence? Very interesting to say the least, and something I'm
interested in.

There is no AI business field to speak of which is differentiated from
the general software business. Most companies which were "AI
companies" in an earlier generation of university spin-offs for Lisp
Machines, and Expert Systems Shells, failed miserably. Venture
Capitalists won't fall into the same sinkhole twice. There are
industrial process control companies which use refined bits of AI,
e.g. in visual inspection of manufacturing processes, and Neural
Network companies, like HNC, who have changed business plans and are
now "pattern-recognition e-commerce security." companies. The Speech
recognition industry has condensed into one company. Web- based AI
means search engines and Language Engines. Ask Jeeves and Google and
Direct Hit and many others may use bits of AI and adaptive
technologies in their systems.

Jobs in AI are just like software jobs everywhere: they chain you to a
workstation and make you work out boring details in exchange for
salary and very little equity. But find a great graduate program in
computer science, and you will likely find fun and exciting work for
no salary and no equity! And you have to be great at both real and
discrete mathematics as well as a natural born programming genius.

As for me, I started programming computers in APL as a freshman in
college, and because it was such a high level language and I didn't
sleep much, I wrote an awful lot of code in a few years. I was
naturally drawn to building heuristic puzzle solvers, game players,
and logical theorem provers. Before I met my wife, friends thought I
was in love with computers. After working at IBM, I went to graduate
school in Urbana and worked with David Waltz on LISP hacking, natural
language processing, and reinvented neural networks, which were
censored from the AI curriculum of the early 80's. I came to the limit
of what could be done with neural networks for intelligence by 1988,
and at Ohio State University, started looking at fractals and chaos as
a source for generativity. Unfortunately, interesting behavior
requires lots of levels and lots of parameters, which is why we
started looking at evolution for selecting and adjusting lots of
parameters, a focus since I've been at Brandeis.

While there is a lot of detailed work and many dead ends, the search for
mechanical intelligence is one of the great unsolved problems, which
is in some way deeply equivalent to questions on the origin of life,
human language, morphogenesis, child development, and human cultural
and economic change. John Casti's book is a great place to start
reading about these big problems.

Human brain - AI connection - is there? (Score:5, Interesting)

Do you think that a greater understanding of the human brain and how
intelligence has emerged in us is crucial to the creation of AI, or do
you think that the two are unconnected? Will a greater understanding
of memory and thought aid in development, or will AI be different
enough so that such knowledge isn't required?

Also, what do you think about the potential of the models used today
to attempt to achieve a working AI? Do you think that the models
themselves (e.g. the neural net model) are correct and have the
potential to produce an AI given enough power and configuration, or do
you think that our current models are just a stepping stone along the
way to a better model which is required for success?

Obviously there are clear medical benefits to brain research. And
the study of any real biological system leads to interesting metaphors
which can be the basis for a novel computational model. But I think it
is unlikely that research into the biology of the brain is crucial to
understanding cognition or replicating intelligence. It's like
studying the width of wires in integrated circuits of a computer. Even
if you get the whole wiring diagram for a computer, it still tells you
little about the programs running on it. I think understanding the
brain is a problem which is underestimated. I heard 25,000 scientists
attend the annual Neurosciences meeting, three times the largest crowd
ever interested in AI. It could be called the Mandelsciences meeting, and
different labs compete to describe what they find in those little
windows on the Mandelbrot set! But I have a lot of friends who are
neuroscientists, and I can be just as facetious about linguistics.

Seriously, I believe we have to understand and replicate the processes
which lead to the development of the brain and its behavior, not
replicate the mammalian brain itself.

The second part of your question "how intelligence has emerged in us"
can be interpreted as a more interesting direction. Here, there is a
lot of opportunity to relate human intelligence as animal intelligence
plus a little more. The fields of evolutionary epistemology, adaptive
behavior, and computational neuroethology are quite interesting. It is
a great question to understand cognition as it appears in other
animals, insects, worms, and even bacterial colonies. The basic
principles of multicellular cooperation are more important than the
millions of specific adaptations of the human brain.

As for the models question, it is sort of like asking whether a chair is
built out of metal, wood, plastic, rubber, or cardboard. It doesn't
matter, as long as it is strong enough. The organization of molecules
has to provide a surface and a normal force at the right height for
sitting. As for the organization of 10 billion things which might make
an AI? It doesn't matter if it is C, Java, Lisp, neurons, or tightly
coupled Markovian 2nd-order polynomial fuzzy sets. Will it stand, or
collapse under its own weight?

most likely path? (Score:5, Interesting) by Anonymous Coward

Dr Jordan:

Do you think that AI is more likely to arise as the result of explicit
efforts to create an intelligent system by programmers, or by
evolution of artificial life entities? Or on the third hand, do you
think efforts like Cog (training the machine like a child, with a
long, human aided learning process) will be the first to create a
thinking machine?

We are taking the second path, seeking the principles for
self-organization so we can harness them to create and invent forms of
organization. There is a 4th path you don't mention, which is the
Terminator/True Names hypothesis, that AI will simply arise among the
powerful router machines of the internet. How would we recognize
coherent behavior arising in telecom infrastructure if it didn't wake
up talking English? I think a SETI for coherent intentional behavior
emerging out of the infrastructure would be a fun project to do for
the people worrying about risks to the information infrastructure.

Software Market & Open Source (Score:5, Insightful) by Breace

In your 'hyperbook' about your idea of a software market I noticed
that you say that Open Source evangelists should support your movement
because it will be (quote) A way for your next team to be rewarded for
their creative work if it turns into Sendmail, Apache, or Linux.

I assume (from reading other parts) that you are talking about a
monetary reward. My question is (and this is not meant as a flame by
any means), do you really think that that's what the Open Source
community is after, after all? Do you think that people like Torvalds
or RMS are unhappy for not being rewarded enough?

If the OS community doesn't care about monetary rewards, is there
another benefit in having your proposed Software Market?
  

According to economic theory, utility is what motivates you to make
decisions in your own self interest. Simple games, like the prisoner's
dilemma, rationalize utility with numeric values to illustrate the
concept, but it isn't money at all. If someone behaves in an
unpredictable way, we must have our definition of their utility wrong.
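
The prisoner's dilemma makes the point in four numbers (the standard
textbook payoffs, shown here as a Python table for concreteness):

    # (row player's utility, column player's utility) for moves
    # C(ooperate) / D(efect) -- the canonical payoff matrix
    PAYOFF = {
        ("C", "C"): (3, 3),    # mutual cooperation
        ("C", "D"): (0, 5),    # sucker's payoff vs. temptation
        ("D", "C"): (5, 0),
        ("D", "D"): (1, 1),    # mutual defection
    }

    def utility(my_move, their_move):
        return PAYOFF[(my_move, their_move)][0]

    # Defection yields higher utility whatever the other player does,
    # yet mutual defection leaves both worse off than mutual cooperation.
    assert utility("D", "C") > utility("C", "C")
    assert utility("D", "D") > utility("C", "D")

The numbers stand in for utility; whether a real player's utility is
money, pride, or altruism is exactly what the model leaves open.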

There are plenty of motivations for writing open source code,
including the challenge and the feeling of altruism, both of which
have utility. A lot of people may write open source for credit in the
community, which also has utility. If RMS were a radical advocate of
anonymity who wrote the GPL so you couldn't put your name on the
source code because it promoted the glorification of the individual,
participating might provide less utility.

Why not Write a Screensaver? (Score:5, Interesting) by peteshaw

First of all, it is indeed an honor to pester a big name scientist
with my puny little questions! Hopefully I will not arouse angst with
the simplicity of my perceptions. Aha! I toss my Wheaties on Mount
Olympus and hope to see golden flakes drift down from the sky!

I have always thought that distributed computing naturally lends
itself to large scale AI problems, specifically your Neural Networks
and Dynamical Systems work. I am thinking specifically of the
SETI@home project, and the distributed.net projects. Have you thought
about, or to your knowledge has anyone thought about harnessing the
power of collective geekdom for sort of a brute force approach to
neural networks. I don't know how NNs normally work, but it seems that
you could write a very small, lightweight client, and embed it into a
screen saver a la SETI@home. This screensaver would really just be a
simple client 'node'. You could then add some cute graphics like a picture of
a human brain and some brightly colored synapses or what have you.

Once the /.ers got their hands on such a geek toy I have no doubt
you'd have the equivalent of several hundred thousand hours or more of
free computer time, and who knows, maybe we could all make a brain
together! I would love to think of my computer as a small cog in some
vast neural network, or at least I would until Arnold Schwarzenegger
got sent back in time to kill my mom. Whaddayathink, Jordan? Is this a
good idea, or am I an idiot?

No, it's very imaginative. You could be one of my AI grad students. But
rather than focusing on neural networks, which, because of matrix
multiplication, do not distribute well, people are looking at such
systems for evolutionary computation. You can evolve individuals on
networked workstations and collect them, or evolve populations which
interact occasionally and pass DNA around. Look at Tom Ray's Net
Tierra project to see how it is going. My colleague Hod Lipson is
developing a screensaver for our evolutionary robotics project, but
release 1 will be Windows rather than Linux compatible (/. sorry).
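
For flavor, here is the shape of the island-model scheme I just
described, as a single-process toy (illustrative Python with a made-up
fitness function; the idea is to put each island on its own machine):

    import random

    def fitness(x):                # hypothetical fixed goal: approach zero
        return -abs(x)

    def step(pop, rate=0.1):       # one generation on one "island"
        pop = sorted(pop, key=fitness, reverse=True)
        parents = pop[: len(pop) // 2]
        return parents + [random.choice(parents) + random.gauss(0, rate)
                          for _ in range(len(pop) - len(parents))]

    islands = [[random.uniform(-10, 10) for _ in range(20)]
               for _ in range(4)]
    for gen in range(200):
        islands = [step(pop) for pop in islands]
        if gen % 25 == 24:         # occasionally pass DNA around
            for i in range(len(islands)):
                migrant = max(islands[i], key=fitness)
                islands[(i + 1) % len(islands)].append(migrant)

    best = max((max(pop, key=fitness) for pop in islands), key=fitness)
    print("best individual:", best)

The occasional migration is the only communication, which is why the
scheme tolerates slow networks and volunteer machines so well.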

Actually, one of my early business plans for the Internet, circa the
first working Java browsers, was to show naughty pictures while
harvesting cycles from your computer and reselling them to people
needing computer time. All that was needed was an assembly-language
interpreter in Java and some interfacing. The problem is that most
computationally intense problems people want to solve have large data
flow requirements which conflict with the download of the naughty
pictures! When I recently tried to corner the market in pig latin
domain names for my new "incubator", panies.com, I didn't
secure putation.com because it sounded bad. One week later I realized
it was a pretty good name for a distributed computation service, but
somebody else had grabbed the URL!

However, there is a critical piece missing from all these visions:
intelligence is a property of an organization of computation; it is
not computation itself. The problem of robotics is not the
limited power of microcomputers, since we could drive any robot from a
supercomputer if we knew what to write! We can get infinite cycles
already, but nobody can write a coherent program bigger than 10M
lines. We have to figure out how to use cycles toward the discovery of
a process of self-organization, rather than spend them on a known
software organization itself.

AI Metrics (Score:5, Interesting) by john_many_jars

I have read several coffee table science books on the subject and
often find myself asking for a way to measure AI. As has been noted,
AI is always elusive and is just around the corner. My question is how
do you gauge how far AI has come and what is AI?

For instance, what's the difference between your TRON demonstration
and a highly advanced system of solving a (very specific) non-linear
differential equation to find relative and (hopefully absolute)
extrema in the wildly complicated space of TRON strategies? Or, is
that the definition of intelligence?

This is a very hard question which I won't be able to joke my way out
of. I think that system performance in specific domains can be
measured, like a rating system for a game like TRON. I think we might
be able to get a measure of the generative capacity of a system in all
possible environments, by capturing strings of symbols representing
different actions, and looking at the grammar of behavior. In general,
however, observers have an effect on their observations of
computational capacity. I usually think of intelligence as a
measurement, not the thing being measured, sort of like the difference
between temperature and heat, or weight and mass. It could be a
measurement of operational knowledge (programmed, not static in a
database), or of efficient use of knowledge resources. This
measurement is applied to an organization. So committees of very smart
people can operate idiotically, and groups of dumb insects can be very
intelligent.

My current best working definition is that intelligence is the ratio
of the amount of problem-solving accomplished to the number of cycles
wasted. When I say we need 10B lines of code, it is not to say that
raw program size is a measure of intelligence, but to express the idea
that inside that code are enough different heuristics and gizmos to
solve lots of problems effectively.

And what about Freedom? (Score:5, Insightful) by Hobbex

Mr. Pollack,

I read your article about "information property" and was surprised to
find you dealt with the matter completely from the point of view of
advancing the market. There are those of us who would argue that the
wellbeing of the market is, at most, a second order concern, and that
the important issues that the Information Age gives rise to regarding
the perceived ownership of information are really about Freedom and
integrity.

These issues range from the simple desire to have the right to do
whatever one wants with data that one has access to, to the simple
futility and danger of trying to limit, to paying individuals, something
that by nature, mathematics, and now technology is Free. They concern
the fact that our machines are now so integral in our lives that they
have become a part of our identity, with our computers as the
extension of ourselves into "cyberspace", and that any proposal which
aims to keep the total right to control over everything in the
computer away from the user is thus an invasion into our integrity,
personality, and freedom.

Do you consider the economics of the market to be a greater concern
than individual freedom?

This is a beautiful question, thank you. My book is exactly about
freedom and rights: The freedom to sell a copy of a book you are done
reading. The freedom to share in the rewards when something you design
or write is in demand by millions of people. The right to own what you
buy.

I see an inexorable movement towards dispossessionism, both coming
from the "right," with UCITA, secured digital rights,
anti-crypto-tampering in the DMCA, and ASP subscription models, and
coming from the "left", with ideas that we should give our writing up
into free collectivist projects.

The Internet is the beginning of Goldstein's "celestial jukebox," the
encyclopedia of everything anyone has ever written, every episode of
every TV show, and every song by every band. It sounds wonderful until
you realize that you will have to pay per view! Bill Gates now has the
money to deploy satellites which will force you to rent his word
processor for $1/hour, the same rate as renting a movie. The law on
theft of satellite programming, unfortunately, as legal doctrine goes,
considers decoding satellite broadcasts theft of cable services,
rather than a protected First Amendment right to receive radio
broadcasts. Once secure distribution of programs on a rental basis is
established, all content publishing will move inexorably into that
mode to maximize profits. No more books, no more records. No more
ownership. Dispossession.

The Free software movement, League for Programming Freedom, Open
Source Software, on the other hand, talk idealistic young individuals
out of their writing. "Contribute it towards a greater good." Be
rewarded by occasional e-mails of thanks from your peers. The Free
Music movement, or "let's rip our CDs and trade MP3s through Napster,"
is less politically than economically motivated, but it is also making
musicians contribute their work for the greater good, at least of
dormitories! Dispossession.

Fascism and Communism, while they have philosophical appeal for their
memetic simplicity, have proven themselves consistently the enemies of
freedom, enterprise and creativity. Ordinary people are "dispossessed"
of their property, which ends up, not surprisingly, in the pockets of
the promoters of the simple philosophy.

My purpose in writing License to Bill is to begin a discussion not
only on a societal remedy to the Microsoft problem, but also to secure, as
a human right, the right to own information properties I buy, rather
than just being able to rent them. I especially want the right to own
and sell copies of my own creations, and to own a library of others'
creations, reasonably priced based on supply and demand, without fear
that a change in technology will render my investments worthless.

A market is just a mechanism which humanity uses to allocate resources
fairly. It is neither good nor evil.

To which I would add... (Score:5, Interesting) by joss

I also read your IP proposal, and agree with the points mentioned
above.

However, I also have a problem with your proposal from an economic
perspective:

Property laws developed as a mechanism for optimal utilization of
scarce resources. The laws and ethics for standard property make
little sense when the cost of replication is $0. The market is the
best mechanism for distributing scarce resources, so you propose we
make all IP resources scarce so that IP behaves like other commodities
and all the laws of the market apply.

We are rapidly entering a world where most wealth is held as a form of
IP. Free replication of IP increases the net wealth of the planet. If
everybody on earth had access to all the IP on earth, then everybody
would be far richer - it's not a zero sum game. Of course, we're
several decades at least from this being a viable option since we've
reached a local minimum. (Need the equivalent of starship replicators first
- nanotech...)

Artificially pretending that IP is a scarce resource will keep the
lawyers, accountants, politicians in work, and will also allow some
money to flow back to the creatives, but at the cost of impoverishing
humanity.

I could actually see your proposal being adopted, and I can see how it
will maintain capitalism as the dominant model, but I also believe
that it is the most damaging economic suggestion in human history

Could you tell me why I'm wrong.
  

Wow! "I also believe that it is the most damaging economic suggestion
in human history" Surely this is a wonderful compliment.

The history and future of money are very interesting, and you can read
about them in various books, including one by Milton Friedman and one
from the Cato Institute. I think today's software houses who force
upgrades on their customers are like the wildcat banks of the
nineteenth century, printing up banknotes, and then declaring
bankruptcy, vanishing with the deposits and setting up shop in another
town.

Before money, there was simply trade in raw and polished goods. Then
there was weighing and coinage. Lots of people thought coins were the
real value and heartily resisted paper money. The gold and silver
standards gave way, and eventually the idea that there was gold for
every dollar bill was revealed as a hoax, and now "money" is simply a
record in your bank's computer that there is a certain amount you are
entitled to withdraw based on the amounts other banks have deposited
for you. The only essential difference between a rich and poor person
is what the bank computers and the registrar of deeds say it is,
backed by military force. And the money supply and international
exchanges now somehow represent our national wealth with respect to
other nations, and other nations' confidence that our banking system
isn't duplicating dollars. Instead of objects of trade, money is
information about potential trade.

While you might not like the idea that money is abstract and in
limited supply, and you have more or less than you want, it is the
soft underbelly of "Starship Economics": Gene Roddenberry died
before coming up with the backstory for how to have a non-mediocre
society with unlimited replication for all.

I once invented a transporter machine for paper using public key
crypto and fax technology. It would hold the source paper in a metal
box, verify the copy was printed, and then destroy the original and
legitimize the copy. With this system, you could fax a dollar bill to
a friend! Now: is a dollar bill just the likeness of a dollar bill
on a crinkly piece of thermal paper, or the actual piece of green
stuff? If Paypal can figure out how you can beam money from your
palmpilot to mine, but a bug lets you keep a copy of the money, I bet
their valuation would go way down.
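
The essential property of the transporter, and of any digital cash,
is that a transfer destroys the sender's claim in the same atomic
step that it legitimizes the receiver's (a toy Python ledger of my
own devising; names and structure are purely illustrative):

    class Ledger:
        def __init__(self):
            self.owner = {}                 # token id -> current holder

        def issue(self, token, owner):
            self.owner[token] = owner

        def transfer(self, token, sender, receiver):
            if self.owner.get(token) != sender:
                raise ValueError("sender does not hold this token")
            # one assignment: the old claim dies as the new one is born
            self.owner[token] = receiver

    ledger = Ledger()
    ledger.issue("dollar-001", "alice")
    ledger.transfer("dollar-001", "alice", "bob")

The PayPal bug I imagined would be the equivalent of handing the token
to bob while leaving alice's claim intact: duplication, and the end of
the currency.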

I am simply saying that permanent use and resale licenses to
changeable information (software, art, literature, music, movies)
which can be traded securely, without loss or duplication, in a public
market, are a form of currency.

Unlimited replication of currency just doesn't work, any more than two
copies of William Shatner.

I stake the middle ground. Both the "right" copyright publishers who
cause currency losses through expiring keys and forced upgrades, and the
"left" copyright violators who duplicate currency, will be welcome at
my table when they see the light.

----------

Thanks for your interesting questions. My comments do not reflect the
official position of my employer Brandeis University, the sponsors of
my laboratory's research, or the companies I am involved with, Abuzz,
Xilicon, or Thinmail.

Humbly yours,
Jordan Pollack
Bigname@scientist.com
P.S. You too can be a scientist thanks to mail.com :)


