Hippostruct

Neural constructive nets

Layers, weights and nodes, like you've never seen them before. Neural constructive nets™ build upon the artificial neural network paradigm, utilising prospective weight adjustment as opposed to traditional backpropagation. Optimisation leverages established methods like stochastic gradient descent, except that each training epoch is associated with its own unique network. Each net's weights are adjusted exactly once, in a process we term construction, which is of equal importance to the network architecture itself.
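
As a rough sketch of what construction could look like in code: a fresh network is instantiated for every epoch, and its weights receive exactly one adjustment before being frozen. The update rule (a single SGD step on squared error for a linear layer) and the helper names construct and adjust_once are illustrative assumptions, not part of any NCN specification.

```python
import numpy as np

rng = np.random.default_rng(0)

def construct(n_in, n_out):
    # A fresh network is built for every training epoch.
    return rng.standard_normal((n_in, n_out)) * 0.1

def adjust_once(W, x, y, lr=0.1):
    # The one and only weight adjustment this net will ever receive.
    # Assumed rule: a single SGD step on squared error.
    err = x @ W - y
    return W - lr * np.outer(x, err)

x = rng.standard_normal(4)
y = rng.standard_normal(2)

# One unique network per epoch; each is adjusted exactly once, then frozen.
nets = [adjust_once(construct(4, 2), x, y) for _ in range(3)]
```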

Robust neural net architecture
Training via construction
Native functional programming
State-of-the-art linear optimisation

Forward propagation. Why look backwards?

We refuse to halt the inexorable march of technology by constantly looking backwards. We believe in being proactive rather than reactive. Why wait for error to rear its head before taking action? NCNs work by propagating an error signal forwards through the network: prospective correction for error, before it has even occurred. Forward propagation recasts learning not as an optimisation problem per se, but as a microcosmic process mimicking the progress of human endeavour through the ages.
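
One way to picture forward-propagated error: an anticipated error signal travels through the layers alongside the activations, nudging each layer's output before any loss has been measured. The correction rule below (a small subtraction of the propagated signal) is purely an illustrative assumption.

```python
import numpy as np

def forward_with_prospective_error(weights, x, anticipated_error, strength=0.01):
    # The anticipated error signal propagates forwards through the same
    # weights as the activations; each layer's output is corrected
    # before any actual error has occurred.
    a, e = x, anticipated_error
    for W in weights:
        a = np.tanh(a @ W)       # ordinary forward pass
        e = e @ W                # error signal travels forwards too
        a = a - strength * e     # prospective correction
    return a

rng = np.random.default_rng(0)
weights = [rng.standard_normal((4, 8)), rng.standard_normal((8, 2))]
out = forward_with_prospective_error(weights, rng.standard_normal(4),
                                     anticipated_error=rng.standard_normal(4))
```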

"By adjusting only one weight per layer, our unique Hot Path Training Strategy (HPTS) is not only lightning-fast, but virtually eliminates the issue of overfitting."

Real memories, real intelligence

The notion of "memory" in artificial intelligence typically denotes a recurrent neural network architecture, consisting of feedback connections as well as feedforward connections. However, the so-called long short-term memory architecture can take us only so far. Structurally, our approach is more akin to a hidden Markov model, using a modified forward algorithm incorporating prospective observations alongside historical observations. Most importantly, to achieve true memory, we use input vectors comprising real human memories.
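
One plausible reading of a forward algorithm "incorporating prospective observations" is the standard HMM forward recursion with an added lookahead term that weights each state by the likelihood of the next observation. The recursion below is the textbook forward algorithm; the lookahead weighting is our assumption.

```python
import numpy as np

def prospective_forward(pi, A, B, obs):
    # Standard HMM forward recursion, plus a lookahead term weighting
    # each state by the likelihood of the *next* observation.
    alpha = pi * B[:, obs[0]]
    for t in range(1, len(obs)):
        alpha = (alpha @ A) * B[:, obs[t]]           # historical observation
        if t + 1 < len(obs):
            alpha = alpha * (A @ B[:, obs[t + 1]])   # prospective observation
        alpha /= alpha.sum()                         # normalise for stability
    return alpha

pi = np.array([0.6, 0.4])                 # initial state distribution
A  = np.array([[0.7, 0.3], [0.4, 0.6]])   # transition matrix
B  = np.array([[0.9, 0.1], [0.2, 0.8]])   # emission matrix
print(prospective_forward(pi, A, B, obs=[0, 1, 0]))
```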

Interactive demonstration

The demonstration below provides a high-level introduction to the NCN paradigm. To begin, select any node from the input layer. For one node in each layer, you may adjust the weight associated with its connection to a prospective node in the adjacent layer, thereby correcting for notional error before it is encountered. Once a weight has been selected in a given layer, the next layer becomes available; its constituent nodes will flash to indicate this.
