Deep Learning Algorithms: Some Things to Ponder

Deep Learning

The concept of artificial intelligence has been around as long as the idea of the computer itself. But solving artificial intelligence problems with a set of predefined rules proved to be difficult, if not impossible. Recently, with access to huge amounts of data, even long-established machine learning algorithms have been producing extraordinary accuracy on practical tasks.

How does that work? Let’s first define what we mean by “learning”. Deep learning algorithms are neural-network-based algorithms that can leverage enormous supervised or unsupervised datasets to produce very accurate results. Google and other Silicon Valley companies are leading this research field, along with some highly regarded universities.

Recently, some companies and universities have been releasing and open-sourcing their deep learning frameworks. This trend is giving developers a chance to learn and experiment with deep learning algorithms.

A brief overview of some popular frameworks is given below, in no particular order.

Caffe

Caffe is a deep learning framework developed by the Berkeley Vision and Learning Center (BVLC) and made with expression, speed, and modularity in mind. The project was created by Yangqing Jia during his PhD and is now sponsored by tech giants such as Facebook, Microsoft, Intel, NVIDIA, and Samsung.

The features that make Caffe unique are its expressive architecture, extensible code, speed, and community. Caffe’s expressive architecture encourages application and innovation: models and optimization are defined in configuration files rather than hard-coded, and switching between CPU and GPU is as simple as setting a single flag, so you can train on a GPU machine and then deploy to commodity clusters or mobile devices.
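As an illustration of this configuration-driven style, a layer in Caffe is described declaratively in a .prototxt file rather than written in code. The fragment below is a hypothetical sketch: the layer name and dimensions are illustrative, not taken from any particular model.

```protobuf
# Hypothetical fragment of a Caffe network definition (.prototxt).
# The model is declared as data, not hard-coded in C++ or Python.
layer {
  name: "conv1"            # illustrative name
  type: "Convolution"
  bottom: "data"           # input blob
  top: "conv1"             # output blob
  convolution_param {
    num_output: 32         # number of filters (illustrative)
    kernel_size: 3
    stride: 1
  }
}

# In the separate solver configuration, a single flag picks the device:
# solver_mode: GPU   # or CPU
```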

Extensible code fosters active development, and speed makes Caffe well suited to both research experiments and industrial deployment. Caffe can process over 60M images per day with a single NVIDIA K40 GPU: that’s 1 ms/image for inference and 4 ms/image for learning. In addition, the Caffe community is already powering academic research projects, startup prototypes, and even large-scale industrial applications in vision, speech, and multimedia.
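Those throughput figures are easy to sanity-check with a bit of arithmetic: at 1 ms per image there are at most 86.4M forward passes in a 24-hour day, so the quoted 60M images/day leaves headroom for I/O and other overhead. A quick sketch:

```python
# Sanity-check Caffe's quoted NVIDIA K40 throughput figures.
MS_PER_IMAGE_INFERENCE = 1   # 1 ms/image for inference (forward pass)
MS_PER_IMAGE_LEARNING = 4    # 4 ms/image for learning (forward + backward)
MS_PER_DAY = 24 * 60 * 60 * 1000

inference_per_day = MS_PER_DAY // MS_PER_IMAGE_INFERENCE
learning_per_day = MS_PER_DAY // MS_PER_IMAGE_LEARNING

print(inference_per_day)  # 86400000 images/day at pure inference
print(learning_per_day)   # 21600000 images/day while training
```

So the 60M/day claim sits comfortably below the 86.4M theoretical ceiling for inference alone.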

However, users have reported some rough edges based on their experience with Caffe. The Protobuf-based layer definitions tend to get in the way once you try to define your own layer types, and multi-GPU training is only partially supported, not covering every configuration, such as splitting work across models or data sets.

TensorFlow

TensorFlow is an open source software library for numerical computation using data flow graphs. It is Google Brain’s second-generation machine learning system, with an emphasis on deep learning; its reference implementation was released on November 9, 2015. TensorFlow’s flexible architecture allows computation to be deployed to one or more CPUs or GPUs in a desktop, server, or mobile device through a single API.

TensorFlow has numerous features, but the key ones are its deep flexibility, true portability, automatic differentiation, and performance. It is not a rigid neural network library: you express a computation as a data flow graph by constructing the graph and then writing an inner loop that drives the computation. TensorFlow provides tools to assemble subgraphs common in neural networks, but users can also write their own higher-level libraries on top of it. TensorFlow runs on desktop and mobile computing platforms, so a developer who wants to implement a deep learning idea can do so without special hardware.
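To make the data-flow-graph idea concrete, here is a minimal toy evaluator in plain Python. This is not TensorFlow’s actual API; the node and graph representations are invented purely for illustration of how a graph is constructed first and then driven by an evaluation loop.

```python
# A minimal data-flow-graph evaluator (illustrative only; this is NOT
# TensorFlow's API). Each node maps to (operation, input node names);
# evaluation recurses through dependencies, mirroring how a framework
# drives computation over a previously constructed graph.

def evaluate(node, graph, feed):
    """Evaluate `node` in `graph`, using `feed` for placeholder inputs."""
    if node in feed:                      # placeholder: value supplied at run time
        return feed[node]
    op, inputs = graph[node]
    args = [evaluate(i, graph, feed) for i in inputs]
    return op(*args)

# Graph for f(x, y) = (x + y) * y
graph = {
    "add": (lambda a, b: a + b, ["x", "y"]),
    "mul": (lambda a, b: a * b, ["add", "y"]),
}

result = evaluate("mul", graph, feed={"x": 2, "y": 3})
print(result)  # (2 + 3) * 3 = 15
```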

Last, but by no means least, TensorFlow benefits gradient-based algorithms through its support for automatic differentiation. The user defines the computational architecture of a predictive model, combines it with an objective function, and adds data; TensorFlow then computes the derivatives. TensorFlow also supports threads, queues, and asynchronous computation, and it adapts well to whatever hardware is available.
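The idea behind automatic differentiation can be sketched with forward-mode dual numbers in a few lines of plain Python. Note that this is a toy illustration of the concept only, not TensorFlow’s implementation (TensorFlow applies reverse-mode differentiation over its data flow graph):

```python
# Forward-mode automatic differentiation with dual numbers: a toy
# illustration of autodiff, carrying a value and its derivative together.

class Dual:
    def __init__(self, value, deriv=0.0):
        self.value = value    # f(x)
        self.deriv = deriv    # f'(x)

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value + other.value, self.deriv + other.deriv)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule: (f * g)' = f' * g + f * g'
        return Dual(self.value * other.value,
                    self.deriv * other.value + self.value * other.deriv)

    __rmul__ = __mul__

def f(x):
    return 3 * x * x + 2 * x + 1   # f'(x) = 6x + 2

x = Dual(4.0, 1.0)        # seed derivative dx/dx = 1
y = f(x)
print(y.value, y.deriv)   # 57.0 26.0
```

With only the arithmetic rules above, the derivative f'(4) = 26 falls out automatically, with no symbolic or numeric differentiation step written by the user.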

TensorFlow is not yet complete. Google made an initial release of the source code and continues to work actively on improving it.

Torch

Torch is a scientific computing framework that provides wide support for deep machine learning algorithms, with a particular focus on GPU-based learning. It uses the scripting language LuaJIT on top of an underlying C/CUDA implementation.

Torch’s goal is maximum flexibility and speed in building scientific algorithms while keeping the process simple. The framework comes with a large ecosystem of community-driven packages in deep learning, computer vision, signal processing, parallel processing, image, video, and networking, among others.

Torch’s core features are:

  • a powerful N-dimensional array
  • many routines for indexing, slicing, transposing, and more
  • an amazing interface to C, via LuaJIT
  • linear algebra routines
  • neural network and energy-based models
  • numeric optimization routines
  • fast and efficient GPU support
  • embeddability, with ports to iOS, Android, and FPGA backends

At its core, Torch consists of popular neural network and optimization libraries that are simple to use while offering maximum flexibility for implementing complex neural network topologies.
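To give a flavor of what a numeric optimization routine does, here is a bare-bones gradient-descent loop. It is a sketch in Python rather than Lua, and is far simpler than what Torch’s optimization packages actually provide:

```python
# A bare-bones gradient-descent loop: the simplest kind of numeric
# optimization routine a framework like Torch ships in library form.
# (Sketch in Python for illustration; Torch itself is scripted in Lua.)

def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Minimize a function given its gradient `grad`, starting at `x0`."""
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)   # step against the gradient
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2*(x - 3); minimum at x = 3.
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
print(round(x_min, 4))  # 3.0
```

A library version of this adds momentum, adaptive learning rates, and tensor-valued parameters, but the core update loop is the same.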

From this, you can quickly see that machine learning is no longer the gated community it once was; it is now open to everyone, which can only benefit us all.
