Reviews of Some AI Projects

By Brian Tomasik

First written: 23 Jul 2014. Last nontrivial update: 13 Jun 2017.

Summary

This page describes a few artificial intelligence (AI) projects and gives my own commentary on them based on a cursory review. I particularly focus on ethical implications, including whether these AIs should be thought of as at least slightly sentient. This list is by no means comprehensive; it's just a collection of projects that I've learned about or played around with.

Introduction

Ordinarily I'm a big promoter of Wikipedia. I suggest that if you plan to summarize information about a topic, you should probably do so on Wikipedia rather than elsewhere, because on Wikipedia the information will be centralized in one spot rather than scattered across random pages on the web.

However, when it comes to discussing technical details of AI projects, I'm wary of contributing to Wikipedia. Adding information about specific AI techniques seems like it might benefit people working to build AI as much as it benefits public audiences seeking to understand AI. As a result, such contributions might speed up AI development at least as much as they speed up civil society's understanding of AI. Because it's plausible that we should want AI to develop more slowly, it's not clear whether highly technical Wikipedia contributions on AI are good or bad on balance. Since they have both good and bad consequences, I doubt such contributions have a dramatic impact either way, but I'm also not sure they're positive.

Because I want to write about some AI projects anyway, I decided to do so on my own website instead. Writing here also allows me to inject my opinions on ethics in addition to briefly summarizing the projects.

Some projects

AnimatLab

This project simulates artificial animals (animats) using biologically plausible physics, biomechanics, and neural networks. I am really impressed with how realistic and functional these simulations are. As can be seen from the source code and the sample videos on the home page, the neural networks that control animats include a level of detail down to specific electrical behavior and parameters of individual neurons. For instance, here are various C++ files for firing-rate neurons. The introductory videos show how neurons can be put together to generate sophisticated animat behaviors, which I found extremely enlightening and well worth watching.
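
To give a flavor of what a firing-rate neuron model involves, here's a minimal Python sketch of a leaky firing-rate unit. This is my own illustration of the general textbook concept, not AnimatLab's actual C++ implementation, and the parameter values are arbitrary:

    import math

    def step_firing_rate(rate, inputs, weights, dt=0.001, tau=0.02, gain=1.0):
        """Advance a leaky firing-rate neuron by one time step.
        (Generic textbook model, not AnimatLab's code.)"""
        # Weighted sum of presynaptic firing rates.
        drive = sum(w * x for w, x in zip(weights, inputs))
        # Sigmoid activation gives the target firing rate.
        target = 1.0 / (1.0 + math.exp(-gain * drive))
        # Exponential relaxation toward the target (leaky integration).
        return rate + (dt / tau) * (target - rate)

    # Example: one neuron receiving an excitatory and an inhibitory input.
    rate = 0.0
    for _ in range(100):
        rate = step_firing_rate(rate, inputs=[0.8, 0.3], weights=[2.0, -1.0])
    print(rate)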

One of the videos features simulated predation by a frog-like creature on a fly-like creature. The fly-like creature moves mostly randomly and doesn't respond aversively to being eaten. This is good. However, it's easy enough to imagine augmenting the simulation with damage signals and escape behaviors for the fly-like creature, at which point the simulation would begin to become morally worrisome, at least to a tiny degree. Alas, animat simulations of the future will probably include this level of detail and beyond.
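
For concreteness, here's a toy Python sketch of what such an augmentation might look like. The class, thresholds, and action names are hypothetical inventions of mine, not anything from AnimatLab:

    import random

    class Fly:
        """Hypothetical animat with a damage signal and an escape reflex."""

        def __init__(self):
            self.damage = 0.0  # accumulated "nociceptive" signal

        def sense_attack(self, intensity):
            # Being bitten increments the damage signal.
            self.damage += intensity

        def choose_action(self, escape_threshold=0.5):
            # Escape behavior overrides random wandering once the
            # damage signal crosses a threshold.
            if self.damage > escape_threshold:
                return "flee_from_predator"
            return random.choice(["wander_left", "wander_right", "hover"])

    fly = Fly()
    fly.sense_attack(0.7)
    print(fly.choose_action())  # -> "flee_from_predator"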

OpenCog

OpenCog is one of the more impressive AGI projects I've seen because it's meant to be a robust software product rather than just a system good enough for publishing academic papers. It's built by real programmers using good coding practices. It's also fairly ambitious and combines a lot of components.

Despite the seeming technical competence of the project, it has an absurdly optimistic proposed timeline for reaching AGI. I don't know what explains this optimism. Regardless, I think Ben Goertzel and the OpenCog developers are very smart and have a solid grasp of cognitive and computer science.

One feature of OpenCog is OpenPsi, an implementation of Joscha Bach's MicroPsi architecture. It models emotions in a simple way: in Bach's proposal, emotions arise as emergent properties of lower-level component tendencies rather than being hard-coded categories. We can compare this with other efforts to categorize emotions and show how they may emerge from smaller parts.
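
As a loose illustration of the emergent-emotion idea, here's a Python sketch that labels an emotional state from two lower-level affect dimensions. The dimensions, labels, and cutoffs are my own drastic simplification, not OpenPsi's actual scheme, which involves more components than this:

    def classify_emotion(valence, arousal):
        """Map low-level affect dimensions to an emergent emotion label.
        (Toy simplification, not OpenPsi's actual implementation.)"""
        # valence in [-1, 1]: how well things are going.
        # arousal in [0, 1]: how activated the system is.
        if valence >= 0 and arousal >= 0.5:
            return "excitement"
        if valence >= 0:
            return "contentment"
        if arousal >= 0.5:
            return "fear/anger"
        return "sadness"

    print(classify_emotion(valence=-0.6, arousal=0.8))  # -> "fear/anger"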

Project Joshua Blue

Project Joshua Blue is an IBM project, apparently from the early 2000s. Alvarado et al. (2001) report: "This project is in its beginning stages. A simple model of mind has been implemented in a limited virtual environment."

Alvarado et al. (2001), pp. 1-2:

Joshua Blue incorporates an emotional model derived from current emotion theory, and is thus superficially similar to models implemented by Breazeal and others. Like such models, our system includes valence and arousal, homeostasis, and drive states, but it also includes proprioception and a pain/pleasure system. [...]

Proprioceptors for affect were implemented to permit the system to introspect on its own global affective state, to be aware of the affect associated with a specific set of objects, and to experience pain and pleasure. This latter constitutes the reward and punishment system that guides exploratory behavior, generates expectations and ultimately motivates goal-directed behavior.

Of course, the extent to which the system actually implements something we would consider (somewhat like) pleasure and pain depends on the details of what functional roles these "experiences" play within the cognitive architecture. Alvarado et al. (2001) discuss some of these details at a superficial level.
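
To make one possible functional role concrete, here's a schematic Python sketch in which a pain/pleasure signal acts as reward and punishment that shapes exploratory and goal-directed behavior. This is purely my guess at the general kind of mechanism involved, not IBM's actual architecture; all names and numbers are made up:

    import random
    from collections import defaultdict

    # Learned value estimates for each action, updated from
    # pain/pleasure feedback. (Illustrative, not IBM's design.)
    values = defaultdict(float)
    actions = ["approach_object", "avoid_object", "explore"]

    def pain_pleasure(action):
        # Stand-in environment: approaching yields pleasure,
        # avoiding yields mild pain, exploring is nearly neutral.
        return {"approach_object": 1.0, "avoid_object": -0.5, "explore": 0.1}[action]

    alpha, epsilon = 0.1, 0.2
    for _ in range(200):
        if random.random() < epsilon:
            action = random.choice(actions)  # exploratory behavior
        else:
            action = max(actions, key=lambda a: values[a])  # goal-directed
        reward = pain_pleasure(action)
        values[action] += alpha * (reward - values[action])

    print(dict(values))  # "approach_object" ends up with the highest value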

PyBrain

PyBrain includes a number of learning algorithms written in Python, covering supervised, unsupervised, and reinforcement learning, as well as optimization methods. Some of the algorithms are based on neural networks. The reinforcement-learning algorithms in particular may raise a tiny bit of ethical concern.
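
For example, here's roughly what training a small supervised network looks like, based on PyBrain's quickstart documentation (modulo small API details I may be misremembering):

    from pybrain.tools.shortcuts import buildNetwork
    from pybrain.datasets import SupervisedDataSet
    from pybrain.supervised.trainers import BackpropTrainer

    # A feedforward network: 2 inputs, 3 hidden units, 1 output.
    net = buildNetwork(2, 3, 1)

    # XOR-style training data.
    ds = SupervisedDataSet(2, 1)
    for inp, target in [((0, 0), (0,)), ((0, 1), (1,)),
                        ((1, 0), (1,)), ((1, 1), (0,))]:
        ds.addSample(inp, target)

    trainer = BackpropTrainer(net, ds)
    for _ in range(1000):
        trainer.train()  # one epoch of backpropagation

    print(net.activate((0, 1)))  # should be close to 1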

Soar

The Soar cognitive architecture includes reinforcement learning and crude emotion modeling in the form of intrinsic rewards.
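
As a generic illustration of intrinsic reward (my own sketch of the concept, not Soar's actual mechanism), an agent's effective reward can combine external reward with an internally generated signal such as a novelty bonus:

    from collections import Counter

    visit_counts = Counter()

    def total_reward(state, external_reward, novelty_weight=0.5):
        """Combine external reward with an intrinsic novelty bonus
        that decays as a state is revisited. (Generic sketch, not
        Soar's implementation.)"""
        visit_counts[state] += 1
        intrinsic = novelty_weight / visit_counts[state]
        return external_reward + intrinsic

    print(total_reward("room_A", 0.0))  # first visit: 0.5 intrinsic bonus
    print(total_reward("room_A", 0.0))  # second visit: 0.25 bonus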