Monthly Archives: April 2015
Daily Links Monday 4/13/15
- http://www.theverge.com/2015/4/15/8420163/self-powered-camera-3d-printed-columbia
Google Analytics Search Options
- Exactly matches
- Contains (default)
- Starts with
- Ends with
- Matches regex
- Is one of
- Does not exactly match
- Does not contain
- Does not start with
- Does not end with
- Does not match regex
- Is not one of
Captured: 4/13/15
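For reference, the match types above behave roughly like the following string predicates (a minimal sketch in Python; the function names are illustrative and not part of Google Analytics, and the “Does not …” variants are simply the negations):

```python
import re

# Illustrative equivalents of the Google Analytics filter match types.
def exactly_matches(value, term):
    return value == term

def contains(value, term):           # the default match type
    return term in value

def starts_with(value, term):
    return value.startswith(term)

def ends_with(value, term):
    return value.endswith(term)

def matches_regex(value, pattern):
    return re.search(pattern, value) is not None

def is_one_of(value, terms):
    return value in terms

print(contains("/blog/2015/04/daily-links", "daily"))                # True
print(matches_regex("/blog/2015/04/daily-links", r"^/blog/\d{4}"))   # True
```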
Harvard – CS109 Data Science
input = 212
result:
http://stackoverflow.com/questions/13633085/is-there-a-php-echo-print-equivalent-in-javascript
Coursera – Practical Machine Learning Resources
- List of machine learning resources on Quora
- List of machine learning resources from Science
- Advanced notes from MIT open courseware
- Advanced notes from CMU
- Kaggle – machine learning competitions
Resources from Practical Machine Learning on Coursera, Lecture 1 slides by Jeffrey Leek
Link Title Generator
Resources:
- http://www.pythoncentral.io/cutting-and-slicing-strings-in-python/ [segment string python]
- http://www.tutorialspoint.com/python/string_len.htm [python string length]
- http://www.w3schools.com/tags/att_a_target.asp
Errors:
- Python error “string indices must be integers”, solved by using the correct slice separator (“:” instead of “,”); see the sketch below
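A minimal sketch of the link-title idea, pulling the resources above together (string slicing, len(), and an anchor tag with target="_blank"); the link_title function and its formatting are hypothetical, and the slice uses “:” rather than “,”, which was the fix for the error above:

```python
def link_title(url, max_len=60):
    """Build an HTML anchor whose visible title is a trimmed form of the URL."""
    title = url.split("://", 1)[-1]           # drop the "http://" or "https://" scheme
    if len(title) > max_len:                   # string length via len()
        title = title[:max_len] + "..."        # slicing with ":" (not ",") as the separator
    return '<a href="%s" target="_blank">%s</a>' % (url, title)

print(link_title("http://www.pythoncentral.io/cutting-and-slicing-strings-in-python/"))
```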
Deep Learning: Tutorials
Deep Learning Code Tutorials
The Deep Learning Tutorials (DeepLearning.net) are a walk-through with code for several important Deep Architectures (in progress; teaching material for Yoshua Bengio’s IFT6266 course).
Unsupervised Feature and Deep Learning
Stanford’s Unsupervised Feature and Deep Learning (UFLDL) tutorial has wiki pages and Matlab code examples for several basic concepts and algorithms used for unsupervised feature learning and deep learning.
Adapted from http://deeplearning.net/reading-list/tutorials/ on 4/4/15
Deep Learning: Reading List
Deep Learning: Software
Software links
C++ or C++/CUDA:
- Cuda-Convnet – A fast C++/CUDA implementation of convolutional (or more generally, feed-forward) neural networks. It can model arbitrary layer connectivity and network depth. Any directed acyclic graph of layers will do. Training is done using the back-propagation algorithm.
- CXXNET – CXXNET is a fast, concise, distributed deep learning framework based on MShadow. It is a lightweight and easily extensible C++/CUDA neural network toolkit with a friendly Python/Matlab interface for training and prediction.
- Eblearn is a C++ machine learning library with a BSD license for energy-based learning, convolutional networks, vision/recognition applications, etc. EBLearn is primarily maintained by Pierre Sermanet at NYU.
- MShadow – MShadow is a lightweight CPU/GPU matrix/tensor template library in C++/CUDA. Its goal is to provide an efficient, device-invariant, and simple tensor library for machine learning projects that aim for both simplicity and performance. It supports CPU/GPU/multi-GPU and distributed systems.
- The CUV Library (github link) is a C++ framework with python bindings for easy use of Nvidia CUDA functions on matrices. It contains an RBM implementation, as well as annealed importance sampling code and code to calculate the partition function exactly (from AIS lab at University of Bonn).
Java:
- neuralnetworks is a Java-based GPU library for deep learning algorithms.
LUSH:
- The LUSH programming language and development environment, which is used at NYU for deep convolutional networks
- Eblearn.lsh is a LUSH-based machine learning library for doing Energy-Based Learning. It includes code for “Predictive Sparse Decomposition” and other sparse auto-encoder methods for unsupervised learning. Koray Kavukcuoglu provides Eblearn code for several deep learning papers on this page.
Matlab/Octave:
- ConvNet is a Matlab-based convolutional neural network toolbox.
- Deep Belief Networks. Matlab code for learning Deep Belief Networks (from Ruslan Salakhutdinov).
- DeepLearnToolbox – A Matlab toolbox for Deep Learning (from Rasmus Berg Palm)
- deepmat – Matlab-based deep learning algorithms.
- Estimating Partition Functions of RBM’s. Matlab code for estimating partition functions of Restricted Boltzmann Machines using Annealed Importance Sampling (from Ruslan Salakhutdinov).
- Learning Deep Boltzmann Machines Matlab code for training and fine-tuning Deep Boltzmann Machines (from Ruslan Salakhutdinov).
- Matlab code for training conditional RBMs/DBNs and factored conditional RBMs (from Graham Taylor).
Python:
- cudamat is a GPU-based matrix library for Python. Example code for training Neural Networks and Restricted Boltzmann Machines is included.
- Gnumpy is a Python module that interfaces in a way almost identical to numpy, but does its computations on your computer’s GPU. It runs on top of cudamat.
- 3-way factored RBM and mcRBM is Python code calling CUDAMat to train models of natural images (from Marc’Aurelio Ranzato).
- mPoT is Python code using CUDAMat and gnumpy to train models of natural images (from Marc’Aurelio Ranzato).
- Theano – CPU/GPU symbolic expression compiler in Python (from the LISA lab at University of Montreal)
- Pylearn2 – Pylearn2 is a library designed to make machine learning research easy.
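As a rough illustration of what a “symbolic expression compiler” means in Theano’s case, a minimal sketch (assuming Theano is installed; API as of the 0.x releases current at the time):

```python
import theano
import theano.tensor as T

# Declare symbolic variables and build an expression graph; nothing is computed yet.
x = T.dscalar('x')
y = T.dscalar('y')
z = x ** 2 + y

# Compile the graph into a callable that runs on CPU or GPU.
f = theano.function([x, y], z)
print(f(3.0, 4.0))   # 13.0
```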
Miscellaneous:
- Nengo – Nengo is a graphical and scripting-based software package for simulating large-scale neural systems.
- RNNLM – Tomas Mikolov’s Recurrent Neural Network based Language Models Toolkit.
- Torch – provides a Matlab-like environment for state-of-the-art machine learning algorithms in Lua (from Ronan Collobert, Clement Farabet and Koray Kavukcuoglu)
Categorized/organized version of http://deeplearning.net/software_links/ as of 4/4/15