Neural Patches

A Codeless Interface for Training Neural Networks

last updated 6/8/2021

 

Get paid for training “neural patches” to detect a micro-feature using a free mobile app today.

GET STARTED NOW!

 

 

Background

In 2006, researchers used fMRI (functional magnetic resonance imaging) and electrical recordings of individual nerve cells to find parts of the brain that became active when macaque monkeys observed another monkey's face. The study indicated that some of these parts of the brain are sensitive to specific orientations of the face, while others are sensitive to more specific features. These parts of the brain are "triggered" when they detect the micro-features they are sensitive to. These sensitive parts of the brain are what we refer to as Neural Patches.

At Advanced Kernels, we are currently adopting a similar approach to neural network training. We do this by accumulating a large number of small, fine-grained, trained neural patches that target specific features, and that are triggered by and can trigger other neural patches. The neural patches are simple, so the training is more data-centric, i.e., it focuses on training data and methods, as opposed to the more traditional end-to-end training, which focuses on network topology and sub-type classification.

 

We have developed a framework with which developers can train fine-grained neural patches on top of a portable base neural network. The neural patches can be applied as plugins to the companion mobile apps (iOS and Android) and tested in realistic, dynamic settings.

How to start earning…

You can start today by training a neural patch to detect a micro-feature. If you successfully train a neural patch and it works well, it will be made available in your name for others to download and use as a plugin in the accompanying mobile apps.

 

The tool for neural patch training is a self-contained, click-button solution available on the Windows Store. The app does not require any advanced knowledge of neural networks or access to a high-end graphics card (any decent discrete or integrated GPU with OpenCL support from Intel, Nvidia, or AMD will do).

Once you have completed the training, you can export a plugin file (with extension .specialneurons). This file can be deployed to mobile devices as a plugin. The plugin contains the neural network weights, base-network output conditions on which the patch is triggered, and an optional icon image to be displayed when it is triggered.
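To make the plugin contents concrete, a .specialneurons file can be thought of as bundling these three pieces of information. The Python sketch below is purely illustrative: the actual file layout is defined by the training tool, and all field names here are hypothetical.

from dataclasses import dataclass
from typing import List, Optional

# Illustrative only: the real .specialneurons layout is internal to the
# training tool, and these field names are made up for explanation.
@dataclass
class NeuralPatchPlugin:
    weights: bytes                   # trained weights of the patch layers
    trigger_categories: List[str]    # base-network outputs on which the patch triggers
    patch_categories: List[str]      # categories the patch itself can identify
    icon: Optional[bytes] = None     # optional icon shown when the patch is triggered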

 

How the Inference Works

The free mobile app can be accessed here for iOS and Android. Note that the Android version requires a device with OpenCL support; recent Samsung, LG, Sony, and Motorola devices offer this feature.

 

Tapping the camera button in the top right corner of the app enables live on-device object detection with the base neural network for the 1000 ImageNet categories. A progress bar at the bottom of the app screen indicates progress through the neural network layers, and at the end of each inference, the name of the main identified object is shown above the progress bar.

 

There is also a plugin button (with a plus sign) in the bottom right corner of the app. Tapping this button takes you to a page for downloading pre-trained neural patches, while long-pressing it opens a file browser for plugging in locally stored neural patches. Patches can include an optional indicator image in their definition, which is displayed as a notification when the patch has been triggered at the end of a base-network pass.

There is a snap button in the bottom left corner of the app that can be used to take pictures of objects that are incorrectly identified by a patch. Backing out of live camera mode takes you to the start screen, where the captured image can be forwarded for dataset enhancement.

 

The mobile app layout

For a neural patch to be triggered (after the initial pass), its potential categories need to overlap with at least one of the base neural network's categories. Patches can define categories solely for this triggering purpose; such categories are never identified as an outcome of the patch. They are not present in the outcome categories when training the patch, but are appended to its possible outcomes only to make the patch trigger on certain base categories. This is useful for identifying super-categories from among a group of base categories (e.g. distinguishing big dogs from small dogs regardless of breed).
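In rough terms, the triggering rule amounts to the Python sketch below (a simplification with hypothetical function and field names; the actual on-device logic is part of the app):

def patch_is_triggered(base_top_category, patch):
    # The patch fires when the base network's identified category appears
    # among the patch's categories, including any trigger-only categories
    # appended purely for this purpose.
    return base_top_category in patch.trigger_categories

def classify_with_patch(image, base_network, patch):
    # First pass: the portable base network identifies the main object.
    base_top_category = base_network.identify(image)
    if patch_is_triggered(base_top_category, patch):
        # Second pass: the patch refines the result with its own categories.
        return patch.identify(image)
    return base_top_category

In the big-dog/small-dog example, every dog breed of the base network would be listed as a trigger-only category, while the patch's trained outcomes would be just the two size classes.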

Example of a contained dog breed patch, and a spider type patch that extends the categories

 

How the Training Works

Traditional neural network training frameworks require scripting-language experience (usually Python) and some familiarity with numerous support packages. Our neural patch training framework, however, focuses on simple predefined topologies and therefore does not require the extensive flexibility of a scripting language. Instead, the focus is on ease of use, so the neural patch developer can concentrate on data collection. Another major advantage of our training framework is that it can be set up quickly through a single installable file, and its interface is simple and easy to use.

To begin training a patch, all a developer needs to provide are the paths to the training image data and a base network snapshot to start from.

 

For the first run, download and unzip this file and point to the .txt in the folder as the base network’s configuration (note: the .txt and .data files will need to be kept in the same path when loading the base network).

There are fields to set the batch size, learning rate, momentum, and decay rates.
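As a rough starting point (illustrative values only, not defaults of the tool; the right settings depend on your dataset and patch size), a small patch can often be trained with values along these lines:

# Illustrative training settings; tune for your own data.
training_settings = {
    "batch_size": 32,        # images per training step
    "learning_rate": 0.001,  # step size for weight updates
    "momentum": 0.9,         # smooths the updates across steps
    "decay": 0.0005,         # decay rate (mild regularization)
}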

That’s it, you’re good to go! You can start training with the click of a button.

 

 

The App Layout

The main window of the application displays, in real time, the images being randomly fed to the training pipeline. A side window logs the progression of the training loss. The rand-tilt-shift checkmark enables/disables image augmentation with random tilts and color shifts.
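The exact augmentation is built into the tool; a rough Python equivalent of a random tilt plus color shift (using Pillow and NumPy, purely for illustration, with made-up ranges) would be:

import random
import numpy as np
from PIL import Image

def rand_tilt_shift(img: Image.Image) -> Image.Image:
    # Random tilt: rotate by a small random angle (assuming an RGB image).
    tilted = img.rotate(random.uniform(-15.0, 15.0))
    # Random color shift: add a small random offset to each channel.
    arr = np.asarray(tilted).astype(np.int16)
    shift = np.random.randint(-20, 21, size=(1, 1, arr.shape[-1]))
    arr = np.clip(arr + shift, 0, 255).astype(np.uint8)
    return Image.fromarray(arr)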

There’s also a stop-N-show checkmark to pause training at any point. This will also display the current inference outcome, and provide the option to save a snapshot of the neural patch and export the final distributable patch.

 

 

Specifically, setting the stop-N-show checkmark at any point during the training process will display the message box shown below, which indicates the outcome of the last inference.

It will also ask whether to save the current snapshot. If you select “No”, it proceeds to the next inference and shows the same message box again. If you select “Yes”, the current full snapshot (which can be used for further training) and a patch snapshot (which can be used in the mobile app) are saved to the location of the base network files (tagged with a time-based hash). To proceed with training, remove the stop-N-show checkmark in the main window and then select “No” in the message box (watch out for the message box getting hidden behind the main window).

 

 

On starting the training process for a new neural patch, the last layer needs to be reconfigured to match the number of categories of the patch, because the number of output neurons of the patch will most likely not match the 1000 outputs of the base network. The training tool does this automatically after displaying a confirmation dialog box. Selecting “Yes” starts the training process with the last layer reconfigured for the correct number of output categories.
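Conceptually, this is the standard step of swapping the final fully-connected layer for one sized to the patch's categories. The tool handles it for you; the PyTorch-style sketch below only illustrates the idea, assuming a ResNet-style base model whose classifier is exposed as .fc:

import torch.nn as nn

def reconfigure_last_layer(base_model: nn.Module, num_patch_categories: int) -> nn.Module:
    # The base network ends in a 1000-way classifier (ImageNet); replace it
    # with a layer sized to the patch's own category count, keeping the
    # earlier layers (and their pretrained weights) intact.
    in_features = base_model.fc.in_features
    base_model.fc = nn.Linear(in_features, num_patch_categories)
    return base_model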

 

 

The developer defines the number of output neurons of the patch simply by specifying the number of different categories in the map file of the training data. The next section looks at how the training data needs to be formatted.

 

Structuring the Training Image Data

The training images need to be structured in a folder with a map_clsloc.txt in its root. This file defines the categories of images in the folder that will be used for training, with their displayable names, and in an order that will determine the neural network output indexes assigned to each.

As a reference, this is the base neural network’s map_clsloc.txt file. The file consists of three columns, with each row corresponding to one image class: the first column is the class tag, the second an index number, and the last a space-free display name for the class.

The tags of the base network are what ImageNet refers to as synsets. For consistency, it is good practice to stick with this convention for the class tags when adding new classes that are not in the base network, but it is not required. All that is required of a tag is that it starts with the character ‘n’ followed by a unique integer.

The file names of the images in every category need to start with that category’s tag followed by an underscore (‘_’) character; any extra identification for individual images can follow. The images can all be placed in the root of this folder, but it is advisable to place each category in a separate folder for manageability; the training tool will do a recursive search through all folders. This is an example of a simple image data folder (for dog breed detection).
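In text form, such a folder might look like the listing below. The tags, indexes, and file names here are made up for illustration; only the three-column format and the tag-plus-underscore prefix on image names matter, and the index numbering should follow the convention of the base network’s map file.

dog_breeds/
    map_clsloc.txt
    labrador/
        n90000001_0001.jpg
        n90000001_0002.jpg
    poodle/
        n90000002_0001.jpg
    beagle/
        n90000003_0001.jpg

with map_clsloc.txt containing one row per class:

n90000001 1 Labrador_Retriever
n90000002 2 Poodle
n90000003 3 Beagle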

 

This is a walkthrough (no audio) of how to organize the data and train a neural patch from the base network, for detecting whether a shark is being observed from underwater or above the water.

 

The desktop training application can be downloaded from the Windows Store.

 

Submit Trained Neural Patches for Review

Send the exported .specialneurons file as an attachment to submission@advancedkernels.com. The extension must be tested on Android and iOS devices and must have a valid trigger icon (set at export time in the Betect Train tool). Describe what the neural patch distinguishes and what classification it is triggered on. We will test it, and if it is suitable and interesting, we will publish it to the neural market for users to download.

 

 

 

