Semiosis

Semiosis by Sue Burke explores an interesting idea: what if plants could think, and what if we had to live together with them, in an alliance?

That being said, I thought the book went on far too long, and I didn’t feel much connection with the characters. That might be because I like my sci-fi to be set in space, or because the switching between generations just didn’t do it for me.

The book is also about how we want to live together. It touches on some good points here, but it didn’t provide any revelations or new insights.

Possible Minds

Possible Minds: 25 Ways of Looking at AI by John Brockman is a collection of essays by leading AI researchers, artists, and philosophers. Each gives their own view on the state and future of AI, as a reflection on The Human Use of Human Beings by Norbert Wiener. The essays are all quite different, and I’ve tried to summarise them here.

One thing I learned (or was reminded of) right away is that technology itself is not a force for good or bad; culture determines that. Technology only enables it. (Update: from a podcast about the Uyghurs in China: DNA testing can help with your ancestry, but it can also enable mass surveillance and profiling.)

1. Seth Lloyd: Wrong, but More Relevant Than Ever

Seth Lloyd is a theoretical physicist at MIT, Nam P. Suh Professor in the Department of Mechanical Engineering, and an external professor at the Santa Fe Institute.

“Wiener’s central insight was that the world should be understood in terms of information. Complex systems, such as organisms, brains, and human societies, consist of interlocking feedback loops in which signals exchanged between subsystems result in complex but stable behavior. When feedback loops break down, the system goes unstable.”
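As a tiny illustration of that last point, here is a minimal sketch (my own, not from the essay): the same system is stable under a negative feedback loop and runs away once the sign of the loop flips. The temperatures and the gain of 0.5 are arbitrary assumptions.

```python
# Negative feedback: the error signal is fed back to correct the system.
temp, target = 15.0, 20.0
for _ in range(10):
    error = target - temp        # signal exchanged with the environment
    temp += 0.5 * error          # move toward the target
print(round(temp, 3))            # ~20.0: the loop settles

# Broken loop: the same feedback now amplifies the error instead.
temp = 15.0
for _ in range(10):
    temp += 0.5 * (temp - target)
print(round(temp, 3))            # ~-268: the system "goes unstable"
```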

“Technological prediction is particularly chancy, given that technologies progress by a series of refinements, halted by obstacles and overcome by innovation. Many obstacles and some innovations can be anticipated, but more cannot. In my own work with experimentalists on building quantum computers, I typically find that some of the technological steps I expect to be easy turn out to be impossible, whereas some of the tasks I imagine to be impossible turn out to be easy. You don’t know until you try.”

“Raw information-processing power does not mean sophisticated information-processing power. While computer power has advanced exponentially, the programs by which computers operate have often failed to advance at all.”

“As machines become more powerful and capable of learning, they learn more and more as human beings do—from multiple examples, often under the supervision of human and machine teachers. Education is as hard and slow for computers as it is for teenagers. Consequently, systems based on deep learning are becoming more rather than less human. The skills they bring to learning are not “better than” but “complementary to” human learning: Computer learning systems can identify patterns that humans can not—and vice versa.”

2. Judea Pearl: The Limitations of Opaque Learning Machines

Judea Pearl is a professor of computer science and director of the Cognitive Systems Laboratory at UCLA. His most recent book, co-authored with Dana Mackenzie, is The Book of Why: The New Science of Cause and Effect.

“Current machine-learning systems operate almost exclusively in a statistical, or model-blind, mode, which is analogous in many ways to fitting a function to a cloud of data points. Such systems cannot reason about “What if?” questions and, therefore, cannot serve as the basis for Strong AI—that is, artificial intelligence that emulates human-level reasoning and competence.”

“Homo sapiens… create and store a mental representation of their environment, interrogate that representation, distort it by mental acts of imagination, and finally answer the “What if?” kinds of questions. Examples are interventional questions (“What if I do such-and-such?”) and retrospective or counterfactual questions (“What if I had acted differently?”). No learning machine in operation today can answer such questions.”

“I view machine learning as a tool to get us from data to probabilities. But then we still have to make two extra steps to go from probabilities into real understanding—two big steps. One is to predict the effect of actions, and the second is counterfactual imagination. We cannot claim to understand reality unless we make the last two steps.”
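To make the seeing-versus-doing distinction concrete, here is a small simulation in the spirit of Pearl’s point (my illustration; the variables and numbers are assumptions): a hidden common cause makes X predictive of Y, while intervening on X changes nothing.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
z = rng.normal(size=n)                 # hidden confounder
x = (z + rng.normal(size=n)) > 0       # X merely reflects Z
y = z + rng.normal(size=n)             # Y is caused by Z alone, not by X

# "Seeing": the observed association makes X look predictive of Y.
print("seeing:", y[x].mean() - y[~x].mean())        # ~1.1

# "Doing": randomizing X plays the role of do(X), severing the Z -> X link.
x_do = rng.random(n) < 0.5
print("doing: ", y[x_do].mean() - y[~x_do].mean())  # ~0.0: no causal effect
```

A model-blind fit on the observed data would happily report the first number; only a model of the intervention recovers the second.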

3. Stuart Russell: The Purpose Put into the Machine

Stuart Russell is a professor of computer science and Smith-Zadeh Professor in Engineering at UC Berkeley. He is the co-author (with Peter Norvig) of Artificial Intelligence: A Modern Approach.

“Putting a purpose into a machine that optimizes its behavior according to clearly defined algorithms seems an admirable approach to ensuring that the machine’s “conduct will be carried out on principles acceptable to us!” But, as Wiener warns, we need to put in the right purpose.”

“The technical term for putting in the right purpose [Midas problem] is value alignment. When it fails, we may inadvertently imbue machines with objectives counter to our own. Tasked with finding a cure for cancer as fast as possible, an AI system might elect to use the entire human population as guinea pigs for its experiments.”

“AI research, in its present form, studies the ability to achieve objectives, not the design of those objectives.”

He mentions some common objections, which he then refutes:

  • Don’t worry, we can just switch it off (an AGI would be smart enough to prevent that)
  • Human-level or superhuman AI is impossible (similar claims were made about nuclear bombs)
  • It’s too soon to worry about it (the timing isn’t predictable, so better to start sooner rather than later)
  • Human-level AI isn’t really imminent, in any case (ditto: not predictable with any certainty, but physically possible)
  • You’re just a Luddite (major technologists argue for safety)
  • Any machine intelligent enough to cause trouble will be intelligent enough to have appropriate and altruistic objectives (it can’t ‘see’ our objectives just by looking at the world; see Bostrom’s paperclip example)
  • Intelligence is multidimensional, so “smarter than humans” is a meaningless concept (kind of true, but still no reason it won’t happen)

“A more precise definition is given by the framework of cooperative inverse-reinforcement learning, or CIRL.”

4. George Dyson: The Third Law

George Dyson is a historian of science and technology and the author of Baidarka: The Kayak, Darwin Among the Machines, Project Orion, and Turing’s Cathedral.

“He likes to point out that analog computing, once believed to be as extinct as the Differential Analyzer, has returned. He argues that while we may use digital components, at a certain point the analog computing being performed by the system far exceeds the complexity of the digital code with which it is built. He believes that true artificial intelligence—with analog control systems emerging from a digital substrate the way digital computers emerged out of analog components in the aftermath of World War II—may not be as far off as we think.”

“Digital computers execute transformations between two species of bits: bits representing differences in space and bits representing differences in time.”

“Analog computers also mediate transformations between two forms of information: structure in space and behavior in time.”

“This [digital vs analog] is starting to change: from the bottom up, as the threefold drivers of drone warfare, autonomous vehicles, and cell phones push the development of neuromorphic microprocessors that implement actual neural networks, rather than simulations of neural networks, directly in silicon (and other potential substrates); and from the top down, as our largest and most successful enterprises increasingly turn to analog computation in their infiltration and control of the world.”

“Nowhere is there any controlling model of the system except the system itself.” (The model is the system itself; it can’t be reduced or ‘controlled’.)

“Before you know it, your system will not only be observing and mapping the meaning of things, it will start constructing meaning as well. In time, it will control meaning, in the same way the traffic map starts to control the flow of traffic even though no one seems to be in control.”

His three laws, not of robotics (just kidding) but of artificial intelligence:

  1. Any effective control system must be as complex as the system it controls (Ashby’s Law)
  2. The simplest complete model of an organism is the organism itself (Von Neumann). Trying to reduce the system’s behavior to any formal description makes things more complicated, not less
  3. Any system simple enough to be understandable will not be complicated enough to behave intelligently, while any system complicated enough to behave intelligently will be too complicated to understand (there is a loophole in the third law. It is entirely possible to build something without understanding it)

“Provably ‘good’ AI is a myth. Our relationship with true AI will always be a matter of faith, not proof.”

“We worry too much about machine intelligence and not enough about self-reproduction, communication, and control.”

5. Daniel C. Dennett: What Can We Do?

Daniel C. Dennett is University Professor and Austin B. Fletcher Professor of Philosophy and co-director of the Center for Cognitive Studies at Tufts University. He is the author of a dozen books, including Consciousness Explained and, most recently, From Bacteria to Bach and Back: The Evolution of Minds.

(quoting Wiener) “[I]n the long run, there is no distinction between arming ourselves and arming our enemies.” “The information age is also the disinformation age.”

“[W]e’re making tools, not colleagues, and the great danger is not appreciating the difference, which we should strive to accentuate, marking and defending it with political and legal innovations.”

“AI in its current manifestations is parasitic on human intelligence. It quite indiscriminately gorges on whatever has been produced by human creators and extracts the patterns to be found there—including some of our most pernicious habits. These machines do not (yet) have the goals or strategies or capacities for self-criticism and innovation to permit them to transcend their databases by reflectively thinking about their own thinking and their own goals.”

“We don’t need artificial conscious agents. There is a surfeit of natural conscious agents, enough to handle whatever tasks should be reserved for such special and privileged entities. We need intelligent tools. Tools do not have rights, and should not have feelings that could be hurt, or be able to respond with resentment to ‘abuses’ rained on them by inept users.”

6. Rodney Brooks: The Inhuman Mess Our Machines Have Gotten Us Into

Rodney Brooks is a computer scientist; Panasonic Professor of Robotics, emeritus, MIT; former director of the MIT Artificial Intelligence Laboratory and the MIT Computer Science & Artificial Intelligence Laboratory (CSAIL). He is the author of Flesh and Machines.

(John Brockman) “[H]e is alarmed by the extent to which we have come to rely on pervasive systems that are not just exploitative but also vulnerable, as a result of the too-rapid development of software engineering—an advance that seems to have outstripped the imposition of reliably effective safeguards.”

“We rely on computers for our banking, our payment of bills, our retirement accounts, our mortgages, our purchasing of goods and services—these, too, are all vulnerable.”

“Humankind has gotten itself into a fine pickle: We are being exploited by companies that paradoxically deliver services we crave, and at the same time our lives depend on many software-enabled systems that are open to attack.”

“Moral leadership is the first and biggest challenge.”

7. Frank Wilczek: The Unity of Intelligence

Frank Wilczek is Herman Feshbach Professor of Physics at MIT, recipient of the 2004 Nobel Prize in physics, and the author of A Beautiful Question: Finding Nature’s Deep Design.

In asking if AI can be conscious, creative, and/or evil, Wilczek answers yes. “Evidence from those fields makes it overwhelmingly likely that there is no sharp divide between natural and artificial intelligence.”

Talking about the ‘Astonishing Hypothesis’ that mind emerges from matter. “People try to understand how minds work by understanding how brains function; and they try to understand how brains function by studying how information is encoded in electrical and chemical signals, transformed by physical processes, and used to control behavior.”

“No one has ever stumbled upon a power of mind that is separate from conventional physical events in biological organisms.”

“… natural intelligence is a special case of artificial intelligence.” He calls it the ‘astonishing corollary’.

“Human mind emerges from matter. Matter is what physics says it is. Therefore, the human mind emerges from physical processes we understand and can reproduce artificially. Therefore, natural intelligence is a special case of artificial intelligence.”

We have been upgrading and enhancing our intelligence for thousands of years: first with fire, glasses, and clothing; now with phones, the internet, and X-rays. All these enhancements can be covered by six factors: speed, size, stability, duty cycle, modularity, quantum readiness.

Human brains are still better than machines at: three-dimensionality, self-repair, connectivity, development, integration.

“If that’s right, we can look forward to several generations during which humans, empowered and augmented by smart devices, coexist with increasingly capable autonomous AIs.”

8. Max Tegmark: Let’s Aspire to More Than Making Ourselves Obsolete

Max Tegmark is an MIT physicist and AI researcher, president of the Future of Life Institute, scientific director of the Foundational Questions Institute, and the author of Our Mathematical Universe and Life 3.0: Being Human in the Age of Artificial Intelligence.

“Consciousness is the cosmic awakening; it transformed our Universe from a mindless zombie with no self-awareness into a living ecosystem harboring self-reflection, beauty, hope, meaning, and purpose.”

“But from my perspective as a physicist, intelligence is simply a certain kind of information processing performed by elementary particles moving around, and there’s no law of physics that says one can’t build machines more intelligent in every way than we are, and able to seed cosmic life.”

Tegmark argues that we have invented and outsourced our way past our limits in stages: 1) first natural processes (heat, light, mechanical power), 2) then we discovered that our bodies are also (biological) machines, and 3) now we are building machines that outshine us in cognitive tasks too.

“The existence of affordable AGI means, by definition, that all jobs can be done more cheaply by machines, so anyone claiming that “people will always find new well-paying jobs” is in effect claiming that AI researchers will fail to build AGI.”

“Homo sapiens is by nature curious, which will motivate the scientific quest for understanding intelligence and developing AGI even without economic incentives.”

“I’m advocating a strategy change from “Let’s rush to build technology that makes us obsolete—what could possibly go wrong?” to “Let’s envision an inspiring future and steer toward it.””

  1. An arms race in lethal autonomous weapons should be avoided.
  2. The economic prosperity created by AI should be shared broadly, to benefit all of humanity.
  3. Investments in AI should be accompanied by funding for research on ensuring its beneficial use. . . . How can we make future AI systems highly robust, so that they do what we want without malfunctioning or getting hacked?

“[T]he real risk with AGI isn’t malice but competence.”

“This mistakenly equates intelligence with morality. Intelligence isn’t good or evil but morally neutral. It’s simply an ability to accomplish complex goals, good or bad.”

“Let’s create our own meaning, based on something more profound than having jobs. AGI can enable us to finally become the masters of our own destiny. Let’s make that destiny a truly inspiring one!”

9. Jaan Tallinn: Dissident Messages

Jaan Tallinn, a computer programmer, theoretical physicist, and investor, is a co-developer of Skype and Kazaa. In 2012, he co-founded the Centre for the Study of Existential Risk—an interdisciplinary research institute that works to mitigate risks “associated with emerging technologies and human activity”.

“As predicted by Turing, once we have superhuman AI (“the machine thinking method”), the human-brain regime will end. Look around you—you’re witnessing the final decades of a hundred-thousand-year regime.”

Another strong incentive to turn a blind eye to the AI risk is the (very human) curiosity that knows no bounds. “When you see something that is technically sweet, you go ahead and do it and you argue about what to do about it only after you have had your technical success.”

(quoting Yudkowsky, blog) “[A]sking about the effect of machine superintelligence on the conventional human labor market is like asking how US-Chinese trade patterns would be affected by the Moon crashing into the Earth. There would indeed be effects, but you’d be missing the point.”

“… superintelligent AI is an environmental risk.”

Tallinn argues that we puny humans fit nicely within the confines of Earth (although we have shaped it to our liking, think air conditioning), but that an AI would be able to survive in a much wider range of environments (e.g. deep space).

10. Steven Pinker: Tech Prophecy and the Underappreciated Causal Power of Ideas

Steven Pinker, a Johnstone Family Professor in the Department of Psychology at Harvard University, is an experimental psychologist who conducts research in visual cognition, psycholinguistics, and social relations. He is the author of eleven books, including The Blank Slate, The Better Angels of Our Nature, and, most recently, Enlightenment Now: The Case for Reason, Science, Humanism, and Progress.

“A healthy society—one that gives its members the means to pursue life in defiance of entropy—allows information sensed and contributed by its members to feed back and affect how the society is governed. A dysfunctional society invokes dogma and authority to impose control from the top down.”

“The possibility that machines threaten a new fascism must be weighed against the vigor of the liberal ideas, institutions, and norms… The flaw in today’s dystopian prophecies is that they disregard the existence of these norms and institutions, or drastically underestimate their causal potency.”

“The reason is that almost all the variation across time and space in freedom of thought is driven by differences in norms and institutions and almost none of it by differences in technology.”

What I take from this is that technology is agnostic: how we use it (norms/culture) determines whether it is used for good or bad. Pinker argues that we, and activists in particular, should focus on things like laws, not on the technology itself.

Pinker also dismisses the competent-but-stupid AI scenarios, in which an AI is very good at completing a goal but pursues it too literally (e.g. making everyone happy by installing dopamine drips). He argues that intelligence (as a broad concept) consists of several parts that ‘grow’ together, and thus an AI capable of doing large things in the world will also be ‘smart’ enough not to ‘hack’ its goal. (I’m not totally sure about this line of argument, and I think Nick Bostrom in particular would disagree.)

“Rates of industrial, domestic, and transportation fatalities have fallen by more than 95 (and often 99) percent since their highs in the first half of the 20th century. Yet tech prophets of malevolent or oblivious artificial intelligence write as if this momentous transformation never happened and one morning engineers will hand total control of the physical world to untested machines, heedless of the human consequences.”

11. David Deutsch: Beyond Reward and Punishment

David Deutsch is a quantum physicist and a member of the Centre for Quantum Computation at the Clarendon Laboratory, Oxford University. He is the author of The Fabric of Reality and The Beginning of Infinity.

(about humans in the past) “Moreover, this must have been knowledge in the sense of understanding, because it is impossible to imitate novel complex behaviors like those without understanding what the component behaviors are for.”

“Such knowledgeable imitation depends on successfully guessing explanations, whether verbal or not, of what the other person is trying to achieve and how each of his actions contributes to that—for instance, when he cuts a groove in some wood, gathers dry kindling to put in it, and so on.”

“No nonhuman ape today has this ability to imitate novel complex behaviors. Nor does any present-day artificial intelligence. But our pre-sapiens ancestors did.”

“Any ability based on guessing must include means of correcting one’s guesses, since most guesses will be wrong at first. (There are always many more ways of being wrong than right.) Bayesian updating is inadequate, because it cannot generate novel guesses about the purpose of an action, only fine-tune—or, at best, choose among—existing ones. Creativity is needed. As the philosopher Karl Popper explained, creative criticism, interleaved with creative conjecture, is how humans learn one another’s behaviors, including language, and extract meaning from one another’s utterances”
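To see why updating alone cannot create, here is a minimal Bayesian update over a fixed hypothesis space (my example; the hypotheses and numbers are invented): evidence reweights the existing guesses, but it can never add a new one.

```python
# Two existing guesses about why someone gathers dry kindling.
priors = {"cooking": 0.5, "tidying": 0.5}
likelihoods = {"cooking": 0.30, "tidying": 0.05}   # P(observation | hypothesis)

# Bayes' rule: posterior proportional to prior times likelihood.
evidence = sum(priors[h] * likelihoods[h] for h in priors)
posterior = {h: priors[h] * likelihoods[h] / evidence for h in priors}
print(posterior)   # {'cooking': ~0.86, 'tidying': ~0.14}

# A better explanation ("building a signal fire") keeps probability zero
# forever unless a creative conjecture first puts it in the hypothesis space.
```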

“So everyone had the same aspiration in life: to avoid the punishments and get the rewards. In a typical generation, no one invented anything, because no one aspired to anything new, because everyone had already despaired of improvement being possible.” (more in From Bacteria to Bach and Back, Daniel Dennett)

“The worry that AGIs are uniquely dangerous because they could run on ever better hardware is a fallacy, since human thought will be accelerated by the same technology.” (This very much opposes the many others who see AI as dangerous, although in many cases they are talking about different things; Deutsch is speaking specifically about creative AGI.)

12. Tom Griffiths: The Artificial Use of Human Beings

Tom Griffiths is Henry R. Luce Professor of Information, Technology, Consciousness, and Culture at Princeton University. He is co-author (with Brian Christian) of Algorithms to Live By.

“But if you want to know why the driver in front of you cut you off, why people vote against their interests, or what birthday present you should get for your partner, you’re still better off asking a human than a machine. Solving those problems requires building models of human minds that can be implemented inside a computer—something that’s essential not just to better integrate machines into human societies but to make sure that human societies can continue to exist.”

Making inferences can be very difficult. If you prefer dessert, will your AI now buy you only desserts? Knowing what humans want (insofar as we really know it ourselves) will be a very big challenge.

“One of the tools used for solving this problem is inverse-reinforcement learning. Reinforcement learning is a standard method for training intelligent machines. By associating particular outcomes with rewards, a machine-learning system can be trained to follow strategies that produce those outcomes.”

“If you’re trying to make inferences about the rewards that motivate human behavior, the generative model is really a theory of how people behave—how human minds work. Inferences about the hidden causes behind the behavior of other people reflect a sophisticated model of human nature that we all carry around in our heads. When that model is accurate, we make good inferences.”
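Here is a toy version of that idea (my sketch under invented assumptions, not Griffiths’ actual models): given some observed choices and a generative “noisily rational” choice model, we ask which candidate reward function makes the observed behavior most likely.

```python
import numpy as np

options = ["salad", "fruit", "cake"]
reward_hypotheses = {                       # candidate hidden rewards (invented)
    "values_health":    np.array([2.0, 1.5, 0.2]),
    "values_sweetness": np.array([0.2, 1.0, 2.0]),
}
observed = ["cake", "fruit", "cake"]        # hypothetical observed choices

def choice_prob(rewards, choice):
    # Boltzmann-rational generative model: higher-reward options are chosen
    # more often, but not always -- people are noisier than optimal.
    p = np.exp(rewards) / np.exp(rewards).sum()
    return p[options.index(choice)]

for name, rewards in reward_hypotheses.items():
    lik = np.prod([choice_prob(rewards, c) for c in observed])
    print(name, round(float(lik), 4))
# "values_sweetness" explains the data best: that is the inference about the
# hidden reward -- the generative model of behavior, run in reverse.
```

The quality of the inference hinges entirely on how accurate that little model of human behavior is, which is exactly Griffiths’ point.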

“[W]hen it comes to understanding the human mind, these two goals—accuracy and generalizability—have long been at odds with each other. … Ultimately, what we need is a way to describe how human minds work that has the generalizability of rationality and the accuracy of heuristics.”

“To develop a more realistic model of rational behavior, we need to take into account the cost of computation. Real agents need to modulate the amount of time they spend thinking by the effect the extra thought has on the results of a decision.” The model used for this is called ‘bounded rationality’.
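A minimal illustration of that cost-of-computation trade-off (my example; the options, noise level, and thinking cost are all invented): each extra “thought” sharpens the agent’s value estimates but costs a little utility, so the best amount of deliberation is somewhere in the middle.

```python
import numpy as np

rng = np.random.default_rng(0)
true_value = {"A": 1.0, "B": 1.2}   # hypothetical options; B is truly better
cost_per_thought = 0.002            # assumed price of each extra sample

def net_utility(n_thoughts, trials=10_000):
    got = 0.0
    for _ in range(trials):
        # Estimate each option with n noisy samples, then pick the apparent best.
        estimates = {k: rng.normal(v, 1.0, n_thoughts).mean()
                     for k, v in true_value.items()}
        got += true_value[max(estimates, key=estimates.get)]
    return got / trials - cost_per_thought * n_thoughts

for n in (1, 4, 16, 64):
    print(n, round(net_utility(n), 3))
# Decision quality climbs with extra thought, but the cost eventually
# dominates: net utility tends to peak at an intermediate n, not the maximum.
```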

“Human beings are an amazing example of systems that act intelligently despite significant computational constraints. We’re quite good at developing strategies that allow us to solve problems pretty well without working too hard. Understanding how we do this will be a step toward making computers work smarter, not harder.”

13. Anca Dragan: Putting the Human into the AI Equation

Anca Dragan is an assistant professor in the Department of Electrical Engineering and Computer Sciences at UC Berkeley. She co-founded and serves on the steering committee for the Berkeley AI Research (BAIR) Lab and is a co-principal investigator in Berkeley’s Center for Human-Compatible AI.

“At the core of artificial intelligence is our mathematical definition of what an AI agent (a robot) is. When we define a robot, we define states, actions, and rewards.” The goal of an AI is to get the highest cumulative reward.
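In code, that definition can be made concrete with a toy world (my sketch; the corridor, the rewards, and the greedy policy are invented for illustration):

```python
states = [0, 1, 2, 3]           # positions in a corridor; the goal sits at 3
actions = [-1, +1]              # step left or step right

def step(state, action):
    return min(max(state + action, 0), 3)            # walls at both ends

def reward(state):
    return 10.0 if state == 3 else -abs(3 - state)   # closer to goal is better

# Greedy one-step lookahead agent (real systems plan much further ahead):
state, total = 0, 0.0
for _ in range(5):
    action = max(actions, key=lambda a: reward(step(state, a)))
    state = step(state, action)
    total += reward(state)
print(state, total)    # ends at the goal (3) with cumulative reward 27.0
```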

We have been doing quite well with this definition. “But with increasing AI capability, the problems we want to tackle don’t fit neatly into this framework. We can no longer cut off a tiny piece of the world, put it in a box, and give it to a robot.”

“So to anticipate human actions, robots need to start understanding human decision making. And that doesn’t mean assuming that human behavior is perfectly optimal; that might be enough for a chess- or Go-playing robot, but in the real world, people’s decisions are less predictable than the optimal move in a board game.” Here I think she is implicitly referencing Judea Pearl and David Deutsch, who argue that this kind of understanding and prediction is not possible with current AI systems.

“Finally, just as robots need to anticipate what people will do next, people need to do the same with robots. This is why transparency is important. Not only will robots need good mental models of people but people will need good mental models of robots.”

“In general, humans have had a notoriously difficult time specifying exactly what they want, as exemplified by all those genie legends. An AI paradigm in which robots get some externally specified reward fails when that reward is not perfectly well thought out. It may incentivize the robot to behave in the wrong way and even resist our attempts to correct its behavior, as that would lead to a lower specified reward.”

What Anca argues for is AI that reasons about us. I think this is the right solution, but also the most difficult one: we are bad at it ourselves, and reasons and preferences differ from person to person. It will be a tough nut to crack.

14. Chris Anderson: Gradient Descent

Chris Anderson is an entrepreneur; former editor-in-chief of Wired; co-founder and CEO of 3DR; and author of The Long Tail, Free, and Makers.

Chris’s story starts with mosquitoes, which follow a gradient descent when searching for you: the stronger the smell, the more they move in that direction (an algorithm). He argues that almost everything around us is driven by gradient descent (hunger, sleepiness, etc.).

He goes on to talk about local minima (finding a solution while a better one might lie over the next ‘hill’). One thing you would probably need to escape them is a (mental) map.

“We’re going to rock ourselves out of local minima and find deeper minima, maybe even global minima. And when we’re done, we may even have taught machines to seem as smart as a mosquito, forever descending the cosmic gradients to an ultimate goal, whatever that may be.”
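Gradient descent itself fits in a few lines. This sketch (mine; the bumpy one-dimensional landscape and the learning rate are arbitrary choices) shows both the algorithm and the local-minimum trap Anderson describes:

```python
import numpy as np

def f(x):
    return np.sin(3 * x) + 0.1 * x ** 2      # several dips, one deepest

def grad(x, eps=1e-6):
    # Numerical derivative: which way does the "smell" get stronger?
    return (f(x + eps) - f(x - eps)) / (2 * eps)

x = 2.0                          # arbitrary starting point
for _ in range(500):
    x -= 0.01 * grad(x)          # keep stepping downhill
print(round(float(x), 3), round(float(f(x)), 3))
# It settles in the nearest dip (near x ~ 1.5), not the deeper one further
# left -- hence Anderson's point about needing a map to escape local minima.
```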

15. David Kaiser: “Information” for Wiener, for Shannon, and for Us

David Kaiser is Germeshausen Professor of the History of Science and professor of physics at MIT, and head of its Program in Science, Technology and Society. He is the author of How the Hippies Saved Physics: Science, Counterculture, and the Quantum Revival and American Physics and the Cold War Bubble (forthcoming).

“Wiener borrowed this insight when composing Human Use. If information was like entropy, then it could not be conserved—or contained.” One key idea here is that if something is known somewhere, you can’t stop others from learning it (you can only delay them a bit). And, from Wiener: “[T]he fate of information in the typically American world is to become something which can be bought or sold.”

(Hmm, I guess I didn’t find too many nuggets of information (a term defined there in a few ways) in this piece.)

16. Neil Gershenfeld: Scaling

17. W. Daniel Hillis: The First Machine Intelligences

18. Venki Ramakrishnan: Will Computers Become Our Overlords?

19. Alex “Sandy” Pentland: The Human Strategy

20. Hans Ulrich Obrist: Making the Invisible Visible: Art Meets AI

21. Alison Gopnik: AIs Versus Four-Year-Olds

22. Peter Galison: Algorists Dream of Objectivity

23. George M. Church: The Rights of Machines

24. Caroline A. Jones: The Artistic Use of Cybernetic Beings

25. Stephen Wolfram: Artificial Intelligence and the Future of Civilization

And from one very good review I’m going to copy another view:

Nobody knows
It has proven nigh impossible to predict where scientific progress or humanity is headed, even when developments – of any sort – were stable. The exercise is more futile now, given the pace of change of new technologies. With rising complexity, the future potentialities rise too. Almost anything, everything, and nothing that is predicted is possible at some time or other in the next century. The wide variety of views in the book has the brightest minds talking past each other, partly because the history and experience they cite are useless for projecting what could lie ahead. Differing meanings of the terms used, as explained in a couple of sections below, also contribute to the extent of the disagreements, as has been commonplace amongst philosophers of every ilk for millennia.

Machines will surpass humanity
Most of the contributors seem to agree that there will be hardly any human skills and faculties where our technological creations will remain inferior forever. None of the contributors resorts to discussions of the soul or a divine entity to justify our perpetual supremacy. Our ability to sense causality, to impute a purpose, and our apparent consciousness are seen by a handful as what will keep humanity ahead, but none of these commentators expects any human trait’s supremacy to last forever. Let’s use a bad analogy: if our natural procreations, our children, grow up to develop their own purposes, outgrow their parents in many skills, and at times develop the willingness to act against their creators, will machines surely never go down that path? The point behind this lousy analogy is that our silicon creations will keep growing exponentially for decades and centuries, if not millennia, to come. A few years ago, the pessimists used to cite computers’ inability to recognize a cat or a face through optical sensors as one reason humans would remain superior for a long time. Machines have surpassed humans at sound and face recognition in a few short years. They may walk or run better than us next, even if robots’ plodding appears clunky and laughable today (note: Boston Dynamics is already doing very well). And machines as a unit may also learn to ascribe purposes, or exhibit complexities that make our consciousness look like that of a cardboard box, in a few decades. It is difficult to pick a set of human aspects that will remain superior for the next hundred years.

Subpoint: Machines will have their own goals and purposes
It is likely that consciousness is nothing but an emergent quality of many neurons interacting with each other, just the way fluidity emerges from water molecules or planetary forces from rocks coming together. What we humans mean by words like beauty, art, goals, and purpose are possibly emergent qualities of the numerosity of the underlying components and their complex interactions or interrelations. If today’s machines can only code, crunch data, or uncover hidden patterns, but cannot define their own “ultimate” utility functions, then the “ultimate” stage set by humans is being pushed further back, with the machines working out the rest on their own. It is not ridiculous to assume that what we deem exotic human qualia – goals, consciousness, beauty, etc. – will also fall prey to ever-growing machine abilities if they prove to be nothing but emergent qualities of complex computational techniques.

The pessimistic forecasts are far more compelling reads
There is no reason AI/AGI/technological progress should make humanity useless, subservient, or extinct for centuries, even if it is a long-term inevitability. As we discussed above, no one knows! That said, the cases of the optimists – i.e., those who mostly believe that the positives of the technology boom will far outweigh any attendant harmful impacts – appear lame compared to the pessimists’. Once again, the optimists do not have to be wrong, but the stage belongs to those with scary stories. Of the 25 views you read, the most frightening are by far the most compelling. The trend tells us about what gets our goat and stirs us to action. That said, the pessimists appear more right because almost all the optimists base their case on dire forecasts that did not prove right historically, rather than painting any upcoming utopia they have in mind. The optimists rest their case on grandmotherly adages like “this time is not different”, while the pessimists point to the horses who thought they would continue to carry humans in transport forever, based on a few thousand years of history, but became showcase items instead. (mandatory CGP Grey video)

Terms without precise meanings and predictions that are too static
With the band of new philosophers and heavy thinkers in this compendium, there are dozens of commonly used terms, including AI, AGI, co-existence, etc., with no precise meanings or with multiple meanings. AI appears to be perpetually a technology of tomorrow, never mind that what we have today would likely have surpassed most scientists’ definition of AI a few decades ago. Given the way we use our smart devices, even a person from the late nineties would claim we already co-exist with our gadgets. The field does not need its Wittgenstein to prove how these thinkers are talking different languages; the technology world is moving far too quickly for the best thinkers to take decades to agree on the underlying meanings of terms. Readers have to distil the views themselves, keeping in mind the plethora of different meanings and time-frames used by the writers while talking about the same subject. (I find this a very good point: we are already living with AI in many forms, and AGI is, I believe, not something that will happen at a single moment X; it will be different skills/intelligences at different moments.) (It also makes me think of a flood rising higher and higher, with some skills sitting higher up the mountains.)

Multiple dystopias
This reviewer can categorize the doomsayers into at least three different buckets:
a. What will we do? If machines do everything better, will humans be like dogs next, better off sitting pretty at home than trying to work on anything? If that’s the case, how will the machines/the rest of the working world bear the burden of a rising horde of the unemployed? How will this unemployed lot live life or find purpose?
b. Will we have any free will? As machines understand us faster and better than we understand ourselves, and continuously act to change our behaviour, will we have any power to stand up against the big brother – be it a set of corporates, governments, or machines – converting us into its zombies? Will we be just like our stone-age forefathers, or like animals, for whom what are machine controls to us would have been unfathomable, massive natural forces?
c. If machines/AGI change the world to make it more suitable for their existence, will humans go extinct? Will machines feel the need to euthanize our race for their purposes someday?
With the rising concerns over privacy and security, most contributors’ AGI-dystopia worries currently centre on the second category. If economic cycles turn, the first-category pessimists may get more of a hearing, even though they are the ones most laughed off now on the basis of historical antecedents. The third-category doomsayers will carry the sensationalist tag until it becomes too late, assuming that day lies somewhere in our future.

View 1: think tanks will not work
Let’s say that humanity’s primary goal regarding AI is guaranteed survival and continued dominance. We want at least some of us to remain the ultimate overlords of this planet. This requires suppressing some AI developments, or at least monitoring them closely. Many groups with the right objectives in mind have been formed globally, but such think tanks are slow-moving entities with little power to make an immediate difference. It is likely that by the time some of their suggestions are enacted, the AI world will already have skirted the underlying issues, with many more of different varieties turning more critical. These groups are playing an important role in highlighting the problems at hand in an unbiased way, but they are unlikely to make a real difference on their own.

View 2: the best solution could be fighting iron with iron!
In a free-wheeling technocratic world, the best solutions will emerge from competing entities. It is likely that, despite the cries from those with extreme views, no “kill switch” will come into existence for any humanity-level AI. The more “the good” who follow the laws are suppressed in one place, the greater the powers of some “bad” elsewhere. This topic is controversial and requires an extended essay of its own, so perhaps not for this review!

Public Commitment 2019 – Update 2

This year my theme is Connection. Just like in the second half of 2018, I’ve been keeping up my updates on the Timeline.

I haven’t thought too much about the theme itself for the last three months. I have been thinking about what to do in the future, and in a way still want to bring together the concepts I’ve learned across a wide range of topics.

Spero is one of the projects where I want to apply this: first in the writing itself, bringing together some AI knowledge and some storytelling experience, and later on learning more about marketing it.

Here is my analysis of the goals and various updates:

Goal 1: Make this website a true personal knowledge hub

This quarter I’ve improved search on the Timeline. Next to that, I’ve been cutting down a bit on listening to podcasts; I found it too much information that wasn’t just-in-time. Now I try to listen to more books, more music, and to spend more time without anything blaring into my ears.

Next quarter I hope to find some time for essays. In some free moments, I would also like to improve the structure a small bit, but this is a very low priority.

Goal 2: Eat good meals that support my well-being 90% of the time

Food has been going very well. Together with Lotte, I’ve been eating very healthy meals. We don’t snack a lot and I keep the alcoholic drinks to 0-2 on weekdays.

I’m currently experimenting with intermittent fasting (IF), eating only between 11:00 and 19:00. It has been going very well and I’ve adjusted to not eating in the morning.

The trouble/difficulty is sports. I normally go to the gym in the morning, but I feel less energetic without some sugars running through my veins. So I’m testing to find the best time to do sports. One good slot (that I’m testing now) could be 12:00-14:00.

Next to that, I would like to know some more about the effects of some foods and have more structure/standard meals per week.

Goal 3: Keep on improving my house

Last quarter we made quite a few improvements (and brought two households together), and I’ve built a bar for the balcony. There is a leak somewhere in the bathroom pipes, which the installer will look at this Friday.

But beyond that, I have no immediate plans for the house. I may take the lead on double glazing (at the front), but we will see about that in a bit.

Goal 4: Achieve my fitness goals

The last few weeks I’ve been cutting down on calories and it’s going pretty well. I will see how long I can hold this pattern, and then move to a 13-week strength cycle (with enough food to help the muscles grow).

My maximum for the snatch is now at about 50kg, but my max for 6 reps (x4 sets) is at around 40kg, which is of course way too close to that maximum. I will try to see which things I can focus on to get better under pressure/weight.

Goal 5: Write Spero

In contrast to the last quarter, I’ve written quite a few parts of Spero and will continue to do so next quarter. My goal is to have a first draft finished by then, and maybe also to have some friends critique it.

That is it for now. Next quarter I hope to share some good updates again.

Bird by Bird

Bird by Bird by Anne Lamott is an awesome guide to writing well. It doesn’t so much teach you what to write (it doesn’t even really go into details like style) as focus much of its energy on the how of writing.

Write with emotion, use your own life (but do change the details), and write shitty first drafts. Improve, repeat.

Writing isn’t about becoming famous; to really enjoy it, try to enjoy the process itself.

From another review: “Bird by Bird is more of a pep talk/psychotherapy session for writers. Sitting home alone writing can be more than a little crazy-making, so it’s nice to have some reassurance that the craziness is normal, along with some tools for getting to the next day.” I agree.

Good in combination with Writing That Works.

Loonshots

In Loonshots, Safi Bahcall takes us on an innovative journey. In the book, he shows how we can stimulate loonshots (moonshots: innovative new products with a high chance of failure) at a personal and company level. He describes the ingredients for loonshots, uses many examples (which can get a bit too detailed), and makes you want to start your own loonshot factory.

From my (read: Queal’s) perspective, I have mixed feelings about the need for and viability of loonshots. What if innovation and progress are made in small steps? Bahcall does mention this in the book, and I think he is also on board with the concept; he calls these franchises (making the next iteration/update of an original product). Whether the distinction (1 or 0) is a bit artificial, I will leave in the middle. Let’s just say the book focuses on one end: the loonshots.

Here are some ingredients/concepts from the book:

  • False Fail: When a valid hypothesis yields a negative result in an experiment because of a flaw in the design of the experiment.
    • The example here is statins: they initially failed in trials on rats, which turned out to be a bad analogue for the human body, and only when someone tried again on chickens (after three more failures, something he mentions many times) did they move forward.
    • Another one I had heard before is about social networks and why some people did invest in Facebook: they saw that earlier social networks had failed in how they executed their strategy, not because the idea was bad. (see more here)
  • Phases of organisation: When an organisation is considered as a complex system, we can expect that system to exhibit phases and phase transitions — for instance, between a phase that encourages a focus on loonshots and a phase that encourages a focus on careers.
    • There is also a large section here (running to the end of the book) on how companies can encourage loonshots; it mostly focuses on removing the (behavioural-economics) incentives for career-making.
    • One example is DARPA, where someone works on a project (with no chance to move up) for some years before rotating out again.
  • Separate artists and soldiers: Make sure that the people working on the loonshots (artists) don’t need to fulfil the same metrics as the people bringing in the immediate profit (soldiers).
    • But do love them equally (the example used was how Steve Jobs focused only on the artists).

This being said, it is a very interesting book, but maybe not very applicable to me (at this moment).

The Book of Why

The Book of Why: The New Science of Cause and Effect by Judea Pearl and Dana Mackenzie argues that correlation is not causation.

Three levels of intelligence:

  1. Seeing – regularities (animals, evolution?)
  2. Doing – predict (e.g. hawk and prey)
  3. Imagining – theory, why! (humans, sometimes)

Current-day AI is at level 1 (not even 2). It can’t posit a counterfactual. And it shows that level 1 alone is already very powerful.
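For flavor, here is what a level-3 computation looks like in miniature (my illustration, not from the book; the structural equation is assumed): first infer the individual’s hidden circumstances from what actually happened, then replay events with one thing changed.

```python
def y_model(x, u):
    return 2 * x + u            # assumed structural equation with noise term u

x_obs, y_obs = 1.0, 3.5         # what actually happened
u = y_obs - 2 * x_obs           # abduction: recover this individual's u (1.5)
y_counterfactual = y_model(2.0, u)   # imagine x had been 2, same individual
print(y_counterfactual)         # 5.5 -- an answer about a world that never
                                # happened, beyond curve-fitting alone
```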

A limited Turing test is Pearl’s goal: e.g. give an AI a story and have it make (causal) inferences from parts of it.

Other notes: Bayesian reasoning, priors, causality, billiard balls.

http://ftp.cs.ucla.edu/pub/stat_ser/r481.pdf

The Longevity Diet

Dr Valter Longo summarises his lifelong journey of researching longevity through diet. In The Longevity Diet, he argues for a nutritious diet in combination with regular fasting-mimicking diets (FMD). The diet is plant-based (with some fish sprinkled in). The FMD should activate innate programs your body has for restoring youth (juvenescence).

The book offers a compelling argument for the influence of diet on our health. It also makes common sense (of which, from The AI Delusion, I gather we need some more). Yet it also relies heavily on epidemiological data and studies of centenarians. What I find most compelling are the clinical studies, for which the other two can provide a basis/hypothesis.

Two questions remain after reading the book. The first concerns longevity and fitness. In the bodybuilding/weightlifting world, IGF-1 is touted as a great way to build muscle. Yet it’s also one of the things the longevity diet mentions as something to avoid (e.g. red meat) and lower (e.g. via FMD). I want to put on some pounds (of muscle) in the coming years, yet I also want to live long. So there is a bit of a dilemma.

The second is about the expected effect size of the longevity diet (+FMD). Will it add 5 years? 10 years? And/or how many healthy years (healthspan) will it add? This is something that is quite difficult to study (us being humans and all), and I hope we will be able to make progress in this area in the coming years. At the same time I also think that fixing things at a molecular level (see Ending Aging) should be pursued.

The best thing could be to eat healthy, with some FMD/fasts sprinkled throughout the year, and then also start fixing some things which we can’t keep intact with a good diet, or that need to be partly supplemented with other interventions.

One thing never to lose sight of is the enjoyment of life. Some of the mice in the calorie-restriction programs were depressed, for twice a lifetime. I really like the idea of having short (5-day) fasts/FMDs 4 times per year. And although I already follow most of the guidelines of the longevity diet (and want to do so even better), I still love to have a beer or two (or 8) every now and then. So if you want to have a long and healthy life, read on for the rest of my notes on The Longevity Diet.

In the introduction we are introduced to the goal of the book: “Contrary to the notion that if we live longer we will extend the ‘sickness’ period, our data indicate that by understanding how the human body is maintained while young, we can stay fully functional into our nineties, hundreds, and beyond. One of our primary ways to achieve this is to exploit our body’s innate ability to regenerate itself at the cellular and organ levels.”

The Five Pillars of Longevity is what Longo builds his research upon:

  1. Basic/juventology research
  2. Epidemiology
  3. Clinical studies
  4. Studies of centenarians
  5. The understanding of complex systems

A lot of the book is dedicated to describing the habits and diets of centenarians. One of the statements in this context is that supplementation (e.g. with antioxidants) doesn’t work. The argument is that you can’t improve on an almost perfect system. I understand the concept in relation to what is being tried. Yet at the same time, our system is ‘almost perfect’ only at keeping us alive for some time after childbirth. In time, I do believe we will be able to copy/supplement this system to live even longer.

Another thing he argues is that we should do things in tune with evolution. Although that phrase by itself is quite empty, the example of fasting is illustrative. Longo argues that in times of low food, a species (humans, but also others earlier on the evolutionary tree) was right to save itself over reproducing (otherwise it wouldn’t be here anymore). So if we trick our body into giving that response, maybe we can set off the same protection and damage-repair (at the protein level) process.

What Longo tries to do is to keep a human young, not treating individual diseases or conditions. The process of repair that he argues for, he calls programmed longevity: “a biological strategy to influence longevity and health through cellular protection and regeneration to stay younger longer.”

You are what you eat, and food can have a large impact on your health. Yet at the same time, Longo argues that your happiness is not determined by the food you eat. Yes, a cake (read: sugars) brings you immediate joy, but eating healthy is not something that will make you unhappy. He even argues that it will make you happier, although indirectly, because of better health.

Constant caloric deprivation/deficit is not what you want. It can extend the life of a mouse, and possibly of humans too. But experiments in both show that it’s the opposite of a mood booster. This is one of the main reasons for doing an FMD/fast only at certain intervals.

The Longevity Diet is what you want to be doing for most of the time, it consists of the following parts:

  • Pescetarian diet: almost 100% plant-based, with fish 2 or 3 times a week (for the omega-3s – this could be vital, but I wonder how critical/necessary it is)
    • One note is that in old age, Longo argues for more protein in the diet (but this is based on studies of centenarians, and one fallacy could be that they didn’t have much protein back in the day and now simply have it available (thus eating more of it))
  • Consume low but sufficient protein
    • From plants and nuts
  • Minimize bad (trans) fats and sugars, and maximize good fats and complex carbs
  • Be nourished (Longo argues for taking a vitamin and mineral supplement; I can’t find good evidence backing this up, and I wonder if it really is beneficial)
  • Eat a variety of foods from your ancestry (again, I’m not 100% convinced, though I do get it from a ‘processed’ vs more whole-food approach)
  • Eat twice a day plus a snack (I’m doing this now, and the main benefit is more control/easier measuring of meals)
  • Observe time-restricted eating (eat within 11-12 hours per day or less)
    • The source linked for this is quite a good one, with an intervention (only eating within the time frame) that led to weight loss.
  • Practice periodic prolonged fasting (fasting/FMD for 2 periods of 5 days or more)
  • Follow the above points to reach/maintain a healthy weight and abdominal circumference

I really like the last part, it’s about finding a new diet, not ‘dieting’.

It also prompted me to take some measurements to better track my progress in body composition (the two scales I have at home are very erratic and give different readings).

Here are some more notes:

  • The age differences between the best and the worst groups are quite small: Okinawa 81.2 years vs the USA’s 76.8 (about 5%)
  • And although some cancer rates are much higher in relative terms, up to 8 times as high, the number of prostate cancers in the US is still only 28 per 100,000
  • “If you take 100 centenarians, you get 100 different elixirs of longevity”
  • Longo argues that the drugs we’re developing are still far away and that diet is, for now, the best thing you can do yourself (I concur)
  • Stay active, this is the second factor after diet that has a huge influence on longevity
    • Walk fast for an hour every day
    • Ride, run, swim 30-40 minutes every other day plus two hours on the weekend
    • Use your muscles
  • The FMD could also have positive effects on many diseases. Although things become a bit more speculative here, there is still quite promising evidence and if you (again) compare it to the average diet, I can very much understand how it could help
  • Chapters that follow deal with cancer, diabetes, cardiovascular disease (the biggest killer that gets the least attention), Alzheimer’s, and autoimmune diseases

The book ends with an observation about our minds. A positive mindset, a will to live, and more could be very significant factors in longevity. The trouble is that it’s not very well studied and the implementation of results can be quite hard. But keeping close friends and enjoying life should not be underestimated.

As a final note, I wholeheartedly believe in what Longo says and I am confident that more research will confirm many of the things not yet proven. Yet I also think we should pursue medical/drug interventions with all the haste we can. Eventually things will break down, our genes weren’t ‘made’ to have us live forever. So we will have to come up with ways to do this ourselves. The Longevity Diet is a great basis, a well-oiled car, now we need some mechanics to do some repairs (and upgrades) every now and then.

Note: As mentioned above, centenarian research really isn’t that good. New research shows that record-keeping in these areas is poor, and that the extreme ages are probably based on lies (or statistical flukes).

The AI Delusion

In The AI Delusion, Gary Smith very successfully argues against the ‘intelligence’ part of artificial intelligence (AI). With numerous examples, and sometimes a rather deep dive into statistics, he shows that current-day AI is nothing more than competence without comprehension.

An AI system might be very good at reading stop signs, but put a sticker on the sign and you’ve lost it. Robustness and common sense are missing from AI systems. The book gave me a new insight into how ‘thinking’ machines are still a long way off. At the same time, I’m looking forward to exploring The Book of Why, one that argues that we can express causality (where now we only have correlation in AI systems) in math.

In some ways I was already on board the ‘AI is awesome’ bandwagon. I had heard about Google Flu, about AI systems that give loans to people (without being biased), and about algorithms to prevent crime. All these examples pass by in the final chapter, and all are ruthlessly shot down.

Smith’s main argument is that when you put together a lot of data and then let a system find correlations, it will find them. If we then don’t take a look inside the black box (at which correlations it used), things can get pretty weird.

Examples in the book include the weather in a city in Australia predicting (in a given year) the next day’s temperature in a city in America (inversely correlated). And many times he uses a random number generator to show that when you gather enough data and test enough correlations, you will always get ‘results’.

Smith tackles fields like technical analysis (looking at stock charts and finding correlations/patterns), drug discovery, and more. With regard to stocks, he mentions numerous ‘systems’ and shows that these don’t work outside of the training data, and that many ‘gurus’ change their system over time (which is touted as evolution, but is of course just refitting the model to new training data).

The trouble is that in many cases the results don’t translate outside the training data (on which you let the AI find the correlations). This was, for instance, the case with the Google Flu system. And when you do find out what a model uses, it can also be gamed (just like people do with SAT tests). One example: people with Android (vs Apple) phones turned out to be worse credit risks. If you (the person wanting a loan) know this, just change phones.

Yet even when you look outside the training data, you can still be lucky (or unlucky, depending on your point of view). When Smith took his random numbers, looked at the ones that ‘predicted’ stock prices in one year, and then looked at the ones that also worked the next year, some did very well. Yet that doesn’t mean they will do well again the year after (remember: random numbers).
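Smith’s random-number demonstration is easy to reproduce in spirit (my reconstruction, with invented sizes): mine a thousand meaningless random series for the best in-sample “predictor” of a random target, then watch it fall apart out of sample.

```python
import numpy as np

rng = np.random.default_rng(7)
target = rng.normal(size=200)                  # the series we "predict"
candidates = rng.normal(size=(1000, 200))      # 1000 pure-noise "indicators"

fit, test = slice(0, 100), slice(100, 200)     # in-sample vs out-of-sample
r_fit = [abs(np.corrcoef(c[fit], target[fit])[0, 1]) for c in candidates]
best = candidates[int(np.argmax(r_fit))]       # the luckiest noise series

print("best in-sample |r|:", round(max(r_fit), 2))       # ~0.35: looks like signal
r_test = abs(np.corrcoef(best[test], target[test])[0, 1])
print("same series out of sample |r|:", round(float(r_test), 2))  # small: it was noise
```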

One point he drives home is that we shouldn’t trust computers blindly. When they show competence without comprehension, you need to be the one who instils the common sense. Computers are way better (read: perfect) at remembering numbers, but they have no clue why.

The book is a bit long in the middle (Smith’s background is in statistics, and it shows), but it is also a good wake-up call that we’re not there yet. I think that with our narrow, stupid, but still very competent AI we can do many great things. But for now, we should leave the comprehension and critical thinking to us humans.

Machines Like Me

Machines Like Me is the latest novel by Ian McEwan. Although I wasn’t really a fan of Solar, I really got carried away with this book. The world is somewhat different from ours and, if I remember correctly, it takes place in 1984. Some technology is more advanced than ours, and McEwan ponders quite a few interesting questions.

Can a machine think? Can it love? The laws of nature don’t forbid it, yet at the same time we don’t know what makes us tick, or conscious. The final chapter also ponders what rights (humanoid) robots have: could we just kill them?

McEwan’s writing is great, and quite funny in a somewhat dark way. I caught a lot of references to machine intelligence, the P vs NP problem, Alan Turing, etc. The politics I was only partially aware of, but that didn’t take center stage.