This article is one of five papers on computer tools for materials presented exclusively on the web as part of the April 1997 JOM-e, the electronic supplement to JOM. The coverage was developed by Steven LeClair of the Materials Directorate, Wright Laboratory, Wright-Patterson Air Force Base.
The following article appears as part of JOM-e, 49 (4) (1997).

JOM is a publication of The Minerals, Metals & Materials Society

Applied Technology

Autonomous Ultrahard Materials Discovery via Spreadsheet-Implemented Neural Network Cascades

S.L. Thaler

Author's Note: A U.S. patent is pending on the creativity machine described in this article.

The application of random inputs to the internal architecture of a trained neural network allows us to interrogate the conceptual space contained therein. For instance, we may supply random inputs to a neural network trained to output the formulas of known chemical compounds of the form AxBy. The formulas emerging from such a network consist of not just the exemplars shown to it during training but also a broad range of plausible chemical compounds previously "unseen" by that network. We say that the network "imagines" or "invents" new materials that are beyond its experience (i.e., its training). By supervising this stream of potential chemical formulas with a second neural network, trained to recognize valuable potential chemical compounds, it is possible to capture the emerging discoveries and, thus, create libraries of totally new commercially and technologically useful materials. This paper describes the construction, function, and output of a preliminary chemical invention machine that proposes new ultrahard materials with the simple binary stoichiometry AxBy.


A trained artificial neural network (ANN) typically interprets the introduction of some perturbation to its architecture as the application of one of its training exemplars to its inputs.1-4 If the source of perturbation affects only the network's inputs, then the network outputs range through all possibilities that are consistent with the stored neural model. If, on the other hand, perturbations affect the connection weight values, the network outputs will begin to depart from the trained-in neural model. In either case, if one patrols the outputs of this first network with a second network that is sensitive to emerging concepts satisfying some given search criteria, it is possible to form what has been termed a creativity machine (CM). Under this premise, the first network is called an imagination engine (IE); the second, or supervisory, network is called an alert associative center (AAC).

As an example, Figures 1-3 depict the process of novel automobile design, showing the basic CM architecture (Figure 1), the egress of the IE's output vectors from the known conceptual space of automobile training exemplars (Figure 2), and the evolution and evaluation of the emerging designs (Figure 3).


Figure 1. An IE that has "seen" examples of automobile shapes through training is exposed to internal chaos (yellow stars), causing it to produce a series of output vectors that represent plausible automobile designs. The supervising AAC network translates the imagined shapes to projected performance.


Figure 2. With increasing internal chaos, the IE's output vectors, representing candidate auto designs, become more radical. The emerging designs begin to deviate from those in the original training set, here symbolized by the conceptual universe (U). This egress from a known conceptual space may be generalized to any problem domain using a suitably trained ANN.


Figure 3. The supervising AAC network instantaneously evaluates each design for anticipated performance characteristics, filing away only those designs meeting some predetermined performance objective. In the case of general CM design, one can use an associative network to map the conceptual outputs of an IE to some measure of merit or related property. This critic network may then capture the most desirable of the emerging concepts.

Using this fundamental discovery paradigm, it is possible to interrogate the neural network model of any conceptual space, thus facilitating the quest for new discoveries, inventions, and solutions to seemingly intractable optimization and tailoring problems. As an example of applying the CM approach, this article focuses on the construction and function of a CM oriented toward the discovery of new ultrahard compounds having the formula AxBy. Following the above-mentioned template for CM construction, the process first involves exposing a feed-forward network to numerous examples of binary compounds and then, following training, subjecting its connection weights to successively higher degrees of perturbation. Emerging from this chaotic network would be a stream of potential chemical compounds, heretofore unseen by the network, yet possessing stoichiometrically plausible proportionalities of the elements A and B. A second network, trained to map chemical compounds AxBy to hardness values, could either cumulatively track the hardest of these compounds or create a vast survey of binary ultrahard materials.

It must be noted that this simple architecture may not be the most effective design and that more complex variations on the CM paradigm may yield improved discovery capabilities. An enhanced cascade architecture may involve any number of highly interlinked IEs and AACs with noise strategically applied to select processing elements. In short, there is no way to avoid the architectural experimentation necessary to establish an optimal network structure. Thus, the technique presented here is offered both to provide the rapid prototyping and testing environment needed to build and evolve such autonomous discovery systems and to introduce the concept of the spreadsheet-implemented CM.5


We typically think of neural network simulations as the sequential evaluation of activation states of neurons within a network, using some algorithmic language such as C or C++. Within such schemes, individual activation levels are only momentarily visible and accessible, as when the governing algorithm evaluates the sigmoidal excitation of any given neuron (Figure 4). Except for its fleeting appearance during program execution, a neuron’s excitation becomes quickly obscured by its redistribution among downstream processing elements.

Exploiting the many analogies between biological neurons and cells within a spreadsheet, we may evaluate the state of any given network processing unit by way of relative references and resident spreadsheet functions (Figure 5). By referencing the outputs of such spreadsheet neurons to the inputs of other similarly implemented neurons, it is possible to create whole networks or network cascades. Unlike the algorithmic network simulation, all neuron activations are simultaneously visible and randomly accessible within this spreadsheet implementation. More like a network of virtual, analog devices, this simulation can be considered to be a distributed algorithm, with all neuron activations updated with each wave of spreadsheet renewal.
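As a concrete sketch of such a spreadsheet neuron (the cell layout here is an assumption for illustration: incoming activations in B2:B4, the corresponding connection weights in C2:C4, and the bias in C5), a single cell can compute a sigmoidal activation using resident functions:

```
=1/(1+EXP(-(SUMPRODUCT(B2:B4,C2:C4)+C5)))
```

Copying such a template cell and adjusting its relative references builds further neurons, and then whole layers and networks, from the one prototype.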

Figure 4. Neuron activation within a given network layer is evaluated in C code.
Figure 5. A neuron implemented in a Microsoft Excel spreadsheet.

As a further benefit of the spreadsheet implementation, the user has a convenient graphical interface for constructing and experimenting with ANNs. For instance, one need only build a template neuron once, using simple copy and paste commands to build and connect whole networks from a prototypical processing unit. These procedures can be repeated on larger scales to move networks into position and link them into larger cascade structures. The resulting compound neural networks are transparent in operation and easily accessible for modification and repair. Furthermore, the approach lends itself well to the rapid prototyping of neural architectures when faced with multiple alternative neural circuitries.

In contrast to existing neural network toolboxes, which allow for the cascading of multiple neural networks but generally represent a graphical interface to some underlying algorithmic source code (i.e., a DLL), processing in this implementation takes place strictly within the confines of the spreadsheet environment, with all processing units constantly accessible for various operator interactions and modifications. Thus, it is possible to readily involve various hidden-layer neuron interactions within the cascade function, to add various functional perturbations to select network weights, and to add recurrencies within any processing unit or group of neurons. Furthermore, within sophisticated spreadsheet applications (e.g., Microsoft Excel), it is possible to enlist various resident functions and diagnostics, such as dependency traces among network cells and real-time plotting of network activation levels.

Because CMs require at least two ANNs linked within a cascade structure (Figure 1, for example), the spreadsheet implementation is the most natural construction technique. The IE and AAC inputs may readily be connected by relative cell references. Further, various forms of functional perturbations may be added to the constant connection weight values to optimize the generation rate of useful concepts. At all stages of construction, any resident Excel facilities may be enlisted to plot the behavior of any neuron or neuronal cluster or to perform various functional traces throughout the connectionist structure.

Excel’s resident macro utility, Visual Basic for Applications (VBA), typically drives the spreadsheet-implemented CM. Its chief functions are to administer random perturbations to individual neurons or connection weights within the IE, to enable any recurrencies within the CM architecture (i.e., Excel does not allow for self-referent loops), and to perform any run-time diagnostics of the machine.

Figure 6. The Excel interface built for this study. Buttons to the left activate various cascade functions; the colorized matrix to the right intuitively displays the locations of ultrahard discoveries of interest.
The spreadsheet implementation of the CM is perhaps most advantageous when the necessary cascade structure consists of many highly interconnected networks. For instance, various juxtapositional invention schemes require the use of several IEs running in parallel and patrolled by a single AAC.6 Alternatively, a number of independent AACs, each alert to separate judging criteria for a single IE’s output, may be simultaneously active. Here, the ability to rapidly interconnect these various modular networks by relative cell reference tremendously expedites overall CM construction and debugging.

Excel, together with its VBA macros, allows the creation of very striking and easy-to-use interfaces. Figure 6, for example, presents the spreadsheet interface used in this materials study. Noise levels may be set in the upper left panel prior to any run. The CM runs may be initiated by the "generate" button. The hardness survey is then cumulatively displayed in the 100 x 100 colorized matrix to the right, where anticipated Knoop hardness is predicted as a function of the constituent elements A and B. A similar matrix, included within the display, shows the anticipated stoichiometric ratios of A and B, in terms of an x/y ratio.


As a reduction to practice of the spreadsheet-implemented CM, the problem of rapidly predicting binary compounds with potentially ultrahard crystalline phases has been explored. Rather than build an IE consisting of a single ANN, I constructed a cascade in which candidate stoichiometries are ‘imagined’ in a multilevel process (Figure 7). This final design choice reflected lessons learned in prior attempts at building this particular CM. The network architecture utilizes a compound IE, consisting of four distinct networks, cooperatively producing candidate formulas AxBy. These formulas are, in turn, handed to a final network that maps each formula to a projected Knoop hardness value.


Figure 7. An illustration of the generalized cascade feed-forward process (depicted for iron and oxygen):
  1. Random inputs are applied to networks 1 and 2, generating electron shell configurations for elements A and B, respectively. (Step 1: Iron and oxygen electron configurations are generated.)
  2. After applying a combination of the elements (A and B, determined in step 1), network 3 generates a recommended stoichiometry: x,y. As hardness values are calculated for each imagined compound, they are recorded within two 100 x 100 matrices—one yielding atomic composition and one providing the ratios of x/y for the hardest stoichiometry found. (Step 2: Fe2O3 is recommended.)
  3. The candidate compound is applied to network 4, where small noise terms are added to the x,y inputs to generate an alternative stoichiometry: x',y'. In the animated sequence, the starbursts represent the introduction of random inputs or noise to specific input nodes of the networks. (Step 3: Fe3O4 is recommended.)
  4. The alternative formula is passed to network 5, where the Knoop hardness of the hardest potential phase of the compound is calculated. (Step 4: For Fe3O4, a Knoop hardness of 682 is calculated.)

In the first stage of stoichiometry generation, four random numbers (three boolean numbers, representing a binary encoding of an element’s row in the Periodic Table, and one analog number, representing its column coordinate or chemical group) are supplied to each of two ANNs (networks 1 and 2 in Figure 7). For example, lithium would be represented with the input vector seed "0, 1, 0, 0.13," with the successive bits "010" representing row two of the Periodic Table and 0.13 denoting that the element is found 13 percent of the way across the row. The outputs of these networks then yield elements A and B in an electronic representation, incorporating a similar binary-coded row along with the valence shell electron configuration via s, p, d, and f populations (row left bit, row middle bit, row right bit, s-electrons, p-electrons, d-electrons, f-electrons). Hence, network 1 or 2 could produce the output "0, 1, 0, 1, 0, 0, 0" for lithium. Using such a representation for chaotically seeding networks 1 and 2, it is possible to rapidly generate representations of randomly chosen elements A and B without the need for look-up tables or formulas for electron shell occupation. As Figure 7 shows, networks 1 and 2 randomly imagine various ground state electron configurations for elements A and B, respectively.

Once networks 1 and 2 imagine candidate elements A and B (inert gases were rejected by the driving algorithm), the respective electron shell configurations pass to network 3, which maps them to an anticipated or approximate stoichiometry, x and y. The training exemplars for this network consisted of 200 binary compounds chosen randomly from standard chemical references. For example, if only Fe2O3 had been shown to the network as a training exemplar, the network would predict 2 and 3 for x and y, respectively. Alternatively, had the network been exposed to exemplars of both FeO and Fe2O3, the subscripts predicted by network 3 would be averages of the observed subscripts—1.5 and 2, respectively. Thus, the intermediate formula recommended by network 3 represents a likely mean stoichiometry based upon the network’s previous chemical experience. By convention, if networks 1 and 2 yielded identical elements such as C and C, the recommended subscripts are 1 and 1.

Supplied with a representative stoichiometry from network 3, network 4 (an autoassociative network) invents some valid and, perhaps, creative alternatives for x and y. To understand how this process works, recall that within an ANN every stored memory, as well as generalizations of those memories, takes the form of so-called attractor basins. That is, if network 4 were made recurrent and some random seed were provided as inputs, the outputs would gravitate toward a memory of some stored exemplar, such as Fe2O3 or another plausible stoichiometry such as Fe3O4 (the network has imagined this new valence state of iron by generalization from other transition metal oxides). Upon multiple passes through this network, relaying outputs back to inputs, the outputs (or inputs) would progressively move toward some exact stoichiometry such as Fe3O4. Thus, the network falls into one of its attractor basins. This scheme allows us to roam through a number of plausible stoichiometric combinations unseen by the network, yet generalized from the chemical formulas of isoelectronic compounds. (In the operation of this particular cascade, the network was not made recurrent to speed processing. Therefore, stoichiometry was not quantitatively correct. Rather, it served as a rough approximation to yield an estimated or fuzzy ratio of x/y departing from the first stoichiometric guess offered by network 3.)

The final network within the discovery cascade, network 5, is trained to map the completed binary compound AxBy to a projected Knoop hardness value of the compound’s hardest possible phase. Training exemplars for this network were gathered from a variety of sources, including the CRC Handbook, mineralogical texts, and a number of references featuring hardness measurements for a variety of semiconductors and intermetallics. To make contact with the plentiful mineralogical examples, a separate network was trained to relate Mohs scale hardness to Knoop hardness. Training data for the hardness mapping network was limited to high atomic number elements chosen exclusively from beyond the second row of the Periodic Table.

The chosen trainer for this problem was a special back-propagation trainer based on Microsoft Excel and known as Neuralyst™. The root-mean-square (RMS) training error was maintained below five percent of the range of output parameters. Testing error was maintained below five percent RMS for all networks involved. All networks employed full connectivity, with all processing units employing sigmoidal squashing functions. Following the training of each network, specially written VBA macros converted the connection weight matrix into linkable spreadsheet networks. Once cut and pasted into their respective positions in the CM cascade, each network was connected manually by relative reference between the required outputs and inputs.

Those spreadsheet cells representing noise inputs to the cascade structure were supplied with a resident random number routine called "rand()," thus achieving the perturbations necessary at the inputs of networks 1, 2, and 4, as shown in Figure 7. A governing looping algorithm was used to repeatedly drive the feed-forward propagation of noise through the spreadsheet network as well as to provide the interactive graphics used to control and monitor the Excel interface shown in Figure 6.


Figure 8. Actual activation patterns across the spreadsheet as new ultrahard materials are imagined. The actual network modules, roughly located by the lavender ridges, are labeled to correspond to the networks depicted in Figure 7. The z-axis represents values appearing in spreadsheet cells, while the x and y coordinates represent rows and columns within the spreadsheet. In many respects, the rising and falling activation levels are reminiscent of cortical activity within the human brain. Analogous to the human internal imagery process, no new information is entering the system. All new information originates internally as a result of the application of noise to the cascade.

In operation, one may simultaneously observe the instantaneous activation level of all processing units of the cascade. Applying the resident Excel x,y,z plotting facility over the spreadsheet region representing the CM cascade, one may view the evolution of activation patterns across the interconnected networks. This is illustrated by the animation in Figure 8, which comprises time slices of spreadsheet activation along with some general notion of network placement. Static topological features represent network weights and biases, while those in motion signify changing neuronal activations or noise inputs.

As the discovery process proceeds, the spreadsheet automatically logs the Knoop hardness of each binary combination of elements A and B, as well as the corresponding stoichiometric ratio x/y. Therefore, as the CM successively encounters harder stoichiometries and elemental combinations of A and B, both hardness and the x/y proportionalities are updated within two matrices that appear as real-time displays in the spreadsheet. (Figure 9 depicts the evolution of the hardness matrix for the low atomic number region.)

Noise inputs occur at two separate stages of the compound IE. In the first stage, noise prompts the Monte Carlo generation of ground-state electron configurations of the elements A and B; the level of perturbation can be considered fixed for this process. In the second stage of stoichiometry generation, the RMS perturbation level applied to the autoassociative network is adjustable, so that the system may systematically depart from the most common stoichiometry recommended by network 3 toward a novel stoichiometry. Hence, the novelty, as well as the predictive risk, involved in generating new stoichiometries increases with the applied noise level at this stage of the cascade feed-through. Within the spreadsheet discovery system, three levels of perturbation to the x and y inputs of network 4 are allowed for: RMS values of 0, 0.1, and 0.2, as compared to normalized inputs that may vary between 0 and 1. The results reported in this paper were obtained at the intermediate noise level of 0.1.


Before revealing any results of preliminary runs of the CM for ultrahard materials, it should be noted that, except for the benchmark example of diamond, the network cascade possessed no prior knowledge of other ultrahard materials composed solely of elements from the first two rows of the Periodic Table. It functions using only two neural network models—one generalizing what constitutes a plausible stoichiometry between any two given elements, A and B (networks 1-4), and one generally relating chemical formula to the projected hardness of the compound’s hardest phase (network 5). With only the input of stochastic noise, the network cascade autonomously discovers many binary compounds promising high bulk modulus and hardness. Many of these materials are already known to the materials community, and the "blind" rediscovery of these ultrahard compounds by the CM constitutes an example of what psychologists call psychological or "P-creativity."7 On the other hand, many of the CM’s predictions may represent novel, as-yet-undiscovered materials, thus constituting what is known as historical or "H-creativity."

In general, this scheme shares many of the characteristics of human-level discovery, including

  1. A completely neuronal approach to learning and creativity, as in the human brain, which is a complex cascade of biological neural networks (i.e., the CM is implemented completely with a similar cascade of ANNs).
  2. Learning through repeated interaction with the conceptual space (i.e., training the component ANNs).
  3. The internal generation of creativity without the introduction of any additional information (i.e., the introduction of unintelligible noise drives the ANN cascade’s search). (Note: This observation has motivated me to explore the ubiquitous sources of noise within neurobiology as the impetus and mechanism whereby human beings create and discover.)8



Figure 9. Operation of the binary ultrahard materials CM. In an uphill-climb process, the CM discovers successively harder stoichiometries and displays them in a 100 x 100 matrix (only low Z is shown here). Hardness values are color coded from blue (the softest) to red (the hardest).


Discriminating between P- and H-creativity, this section summarizes the results of the CM as applied to ultrahard materials discovery. An hour-long run of the CM was employed, with low levels of noise (RMS fluctuations of 0.1, as compared to normalized inputs ranging between 0 and 1) applied to networks 1 and 4. The results are displayed in Figure 9, where most of the ultrahard materials (highlighted in red, indicating a Knoop hardness > 8,000) formed largely between elements found within the first two rows of the Periodic Table. This general finding accords with what is known about ultrahard materials: small atomic dimensions and large bond energies contribute to high bond energy densities. Beyond the low atomic numbers, there are some other unexpected discoveries that may deserve experimental corroboration. A ranking of the top 30 predicted ultrahard compounds discovered by this run is shown in Table I.


Preliminary runs of the autonomous materials discovery machine corroborate the general belief that the majority of anticipated ultrahard materials should reside among the binary combinations of elements within the first two rows of the Periodic Table. This explains the red band of ultrahard binary compounds in Figure 9; this band consists of low atomic number elements, largely carbides, borides, and beryllides. Within this ultrahard grouping, diamond (C-C) is the only ultrahard material known to all components of the CM cascade. All other binaries in this cluster have been reinvented by the neural network cascade, largely by generalizing stoichiometries and hardness values for the materials comprising the training set (primarily compounds consisting of high atomic number elements).

Table I. The Top 30 Predicted Ultrahard Binary Compounds Based on Projected Knoop Hardness (Hk)
Rank    A    x    B    y    Hk
1 Be 6.000 Fr 1.060 8628
2 C 1.228 C 1.217 8623
3 B 1.391 O 1.689 8605
4 B 1.225 N 1.265 8575
5 Be 6.000 Cs 1.148 8546
6 C 1.239 N 1.262 8530
7 B 5.999 Cs 1.139 8526
8 B 6.000 Fr 1.076 8374
9 Mg 6.000 Fr 1.238 8288
10 B 5.462 Rb 1.330 8182
11 Au 4.453 Zr 5.999 8137
12 Al 6.000 Fr 1.109 8131
13 Be 5.921 Rb 1.448 8102
14 Pt 4.449 Zr 5.999 8040
15 Al 5.996 Cs 1.419 8032
16 Au 4.731 Mn 5.999 8025
17 Au 4.598 V 5.999 8004
18 Be 1.389 O 1.568 7953
19 Pt 4.591 V 5.999 7936
20 Au 4.424 Y 5.999 7909
21 Au 4.374 La 5.999 7903
22 Pt 4.720 Mn 5.999 7900
23 B 1.277 Yb 1.196 7896
24 Au 4.812 Fe 5.999 7859
25 Mg 6.000 Cs 1.774 7839
26 Pt 4.421 Y 5.999 7837
27 Au 4.546 Ti 5.999 7772
28 Pt 4.371 La 5.999 7759
29 C 1.227 Tm 1.186 7759
30 Pt 4.540 Ti 5.999 7750
These P-creative materials discoveries include the following:


More speculative recommendations proposed by the CM include the following results:


One deficit in the cascade employed here is the shortage of reliable hardness data for training the component networks. Each network within the cascade architecture was exposed to no more than about 200 exemplars. In the case of both semiconductors and intermetallics, there was an obvious paucity of data. Further, the composition of the training database was eclectic, requiring the integration of various databases, the conversion of a variety of hardness units (e.g., Knoop, GPa, and Mohs scale), and the use of data culled from nonuniform measurement techniques (e.g., indentor and loading characteristics). Ideally, we would like to repeat this project using the latest nanoindentor techniques as well as loading-unloading curves on a wide spectrum of compounds.

Another system pathology involves the IE's occasional generation of ions, radicals, and charge complexes. Therefore, in addition to producing species such as H2O and H2O2, the IE proposes species such as H3O+ and OH-—materials attaining the equivalent of inert gas electron configurations. When such materials were generated, they were generally interpreted as existing in combination with other ions to achieve charge neutrality within the derivative crystal lattice.

Currently, a much more ambitious materials CM is under construction, incorporating an IE that checks for charge neutrality and thermodynamic stability for candidate species having as many as five distinct chemical elements. The IE has been trained with more than 10,000 inorganic compounds drawn largely from x-ray crystallographic databases. Generating hypothetical compounds with as many as six elements, this IE will serve to generate a dynamic database of potential chemical compounds. A cascaded associative network may simultaneously predict a wide range of chemical, physical, and, perhaps, medical properties for each of these emerging compounds, allowing us to tailor specific compounds to a variety of requirements.


Fully exploiting the analogy between the spreadsheet cell and the biological neuron, we have built a neural network cascade that is capable of human-level discovery of new ultrahard materials. We have built this virtual discovery machine using only computational neurons within pretrained neural networks, without recourse to algorithmic steps or look-up tables. Realizing that the brain uses similar computational units to achieve creative feats, we consider this purely connectionist model to be a potential model of seminal human cognition. In this particular problem, we see parallels with human creative endeavors, wherein blind rediscovery or breakthrough revelations may occur.

The spreadsheet implementation of the creativity machine paradigm is most conducive to the rapid prototyping of the required network cascade architecture, allowing us to quickly experiment with alternative networks and interconnectivities, finally arriving at the highly successful architecture discussed herein. Encouraged by the cascade’s ability to rediscover both verified and theoretical ultrahard compounds, we tend to attach more significance to some of the more speculative predictions. Hopefully, some of these radical projections reflect very subtle trends within the relatively sparse training database and are, in fact, useful patterns that have evaded the scrutiny of researchers. Alternatively, these predictions may represent the thinking of an inadequately trained system that will require more of an apprenticeship period with the seasoned materials scientist before yielding completely accurate recommendations.

In the meantime, however, we may reliably use such creativity machines to provide educated guesses at regimes that will yield important materials breakthroughs.


I thank Steven LeClair of Wright Laboratory, who was central to the motivation and editing of this JOM-e article. Similar thanks go to Allen Jackson of Wright Laboratory and Irwin Singer at the Naval Research Laboratory's tribological laboratories, both of whom provided their intuitions in reviewing the predictions of the creativity machine for ultrahard materials.


1. S.L. Thaler, "4-2-4 Encoder Death," Proceedings of the World Congress on Neural Networks 2 (Mahwah, NJ: Lawrence Erlbaum & Associates, 1993), pp. 180-183.
2. S.L. Thaler, "Virtual Input Phenomena within the Death of Simple Pattern Associator," Neural Networks, 8 (1) (1995), pp. 55-65.
3. P. Yam, "As They Lay Dying...Near the End, Artificial Neural Networks Become Creative," Scientific American, 272 (5) (1995), pp. 24-25.
4. S.L. Thaler, "Neural Networks that Create and Discover," PC AI, (May/June 1996), p. 16.
5. S.L. Thaler, "Creativity via Network Cavitation—an Architecture, Implementation, and Results," Proceedings of the Adaptive Distributed Parallel Computing Symposium (1996), pp. 83-90.
6. S.L. Thaler, "A Proposed Symbolism for Network-Implemented Discovery Processes," Proceedings of the World Congress on Neural Networks 1996 (Mahwah, NJ: Lawrence Erlbaum & Associates, 1996), pp. 1265-1268.
7. M.A. Boden, The Creative Mind (London: George Weidenfeld, 1990), p. 32.
8. S.L. Thaler, "Is Neuronal Chaos the Source of Stream of Consciousness?" Proceedings of the World Congress on Neural Networks 1996 (Mahwah, NJ: Lawrence Erlbaum & Associates, 1996), pp. 1255-1258.
9. P.K. Lam, M.L. Cohen, and G. Martinez, "Analytic Relation between Bulk Moduli and Lattice Constants," Phys. Rev. B, 35, p. 9190.
10. A.Y. Liu and M.L. Cohen, Science, 245 (1989), p. 841.
11. J. Russell, "Theoretical Projection of Superhard Materials," Diamond Deposition Science & Technology, 5 (3) (March 3, 1995).

S.L. Thaler earned his Ph.D. in physics from the University of Missouri at Columbia. He is currently president and chief executive officer of Imagination Engines, Inc., in St. Louis, Missouri.

For more information, contact S.L. Thaler, Imagination Engines, Inc., 12906 Autumn View Drive, St. Louis, Missouri 63146; (314) 576-1617; fax (314) 434-8591; e-mail


Copyright held by The Minerals, Metals & Materials Society, 1997
