Machine learning is the buzzword of the current wave of technological change, and the excitement is understandable: machines are increasingly able to learn from data and react to their environment in ways that once required human judgment.

Machine learning is a field of computer science in which researchers work towards giving computers the ability to learn from data. This is made possible by sets of supervised or unsupervised learning rules referred to as algorithms. Among the most popular machine learning algorithms are artificial neural networks, support vector machines, k-nearest neighbors, and decision trees. In this article we will focus on decision trees.

A decision tree is a graphing tool that uses branches to illustrate all the possible solutions to a problem. It helps keep the problem-solving process focused on the optimal goal by assigning a ranked value to each possible outcome and pruning the low-value “branches”. This aids in automating the whole decision-making process.

Decision-making software is readily available and widely used in real-world data analytics, thanks to its usefulness and versatility in data exploration. For example, one can construct an emergency response decision tree for medical doctors:


Decision trees have been applied extensively, and with great success, across a variety of application domains.

In machine learning, decision trees are widely used for classification and prediction problems, and they are powerful tools for data extraction and mining. They split the input variables into branched segments, ultimately forming an inverted tree: a root node acts as the entry point for a decision assessment, internal nodes act as intermediate decision gateways, and leaf nodes act as endpoints. Much of their power lies in their non-parametric design, which makes them effective on large datasets, although unpruned trees are prone to overfitting and typically need pruning or depth limits, much as models trained with backpropagation and gradient descent need regularization.
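As a local sketch of the idea (using scikit-learn rather than BigML, and a tiny made-up dataset purely for illustration), a decision tree classifier can be trained and its branching rules printed in a few lines:

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy data: [age, resting heart rate] -> 0 = low risk, 1 = high risk
X = [[25, 60], [30, 65], [45, 90], [50, 95], [60, 100], [35, 70]]
y = [0, 0, 1, 1, 1, 0]

# max_depth limits growth: one simple way to curb overfitting
tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(X, y)

# The printed rules read top-down: root node first, leaves as endpoints
print(export_text(tree, feature_names=["age", "heart_rate"]))
print(tree.predict([[28, 62], [55, 98]]))
```

The printed rules make the “inverted tree” structure concrete: each internal node is a threshold test on one feature, and each leaf is a class decision.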

Having heard or read a lot about machine learning, you are probably asking: is machine learning for everyone? In this article I am going to show you how BigML helps fill that gap. Since most machine learning problems are predictive in nature, BigML, a versatile machine-learning-as-a-service tool, offers a simple interface for importing data and making predictions. Much of BigML's appeal lies in its easy-to-use design: it requires almost no prior machine learning knowledge and is built around a “one-click” usage mode.

BigML, Machine learning for everyone

We will create a classification model based on a freely available dataset: the iris dataset. In machine learning this is known as the training set. The model is a representation of the relationships between the features of the real-world data, as learned by BigML. BigML presents its output in the form of a decision tree. After the training process we will feed our model new data points and ask it to predict the corresponding class for each one; in our case, the iris species each flower belongs to.

The iris dataset contains the petal and sepal measurements (length and width) of three species of iris (Setosa, Versicolor, and Virginica). It's a good dataset for exploring the step-by-step usage of BigML.
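If you want to inspect the dataset before uploading it to BigML, scikit-learn ships a bundled copy (an alternative to downloading the CSV; this is a local convenience, not part of the BigML workflow):

```python
from sklearn.datasets import load_iris

iris = load_iris()

# 150 rows, 4 measurements per flower (sepal/petal length and width, in cm)
print(iris.data.shape)
print(iris.feature_names)
print(iris.target_names)   # the three species our model will predict
print(iris.data[0])        # first sample's four measurements
```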

Getting started

Create a free account on BigML and, after you are done with the registration process, log in to your BigML dashboard page.

For this demonstration we will be using BigML in Development Mode, which is a free plan.

You should see the following screen.


Click on the dataset to have a look at it.


The predictor parameter

The last field is the parameter we aim to train our model to predict, as shown in the screen below:


Dataset clean up

This step involves manually configuring your dataset in case you want to eliminate some values. In our case the model is simple and for demonstration purposes only, and our data suits the task with minimal clean-up. Still, the following screen shows how to access the manual data configuration tool:


Our predictive model

This is where you get a taste of the versatility of BigML. You configure your model by selecting BigML's “1-click” features as shown below.


By default, as mentioned earlier, BigML represents the predictive model in the form of a decision tree.


The decision tree shows you how BigML classified the different iris classes according to their attributes. By clicking on a node you can see how BigML grouped certain classes together and why. The panel on the right shows what the iris classes in the selected node have in common.


Model Evaluation

Now that we have a predictive model, it is wise to assess its predictive capability and accuracy. To achieve this, we shall split our dataset into two:

  • The first dataset, called the training dataset, represents 80% of the original dataset and will be used to create a training model, exactly as we just did with the full dataset.
  • The second dataset is called the test dataset; it represents the remaining 20% of the original dataset.

We then run an evaluation where the model (built from the training set) will be used to make predictions on the inputs of the test set, and these predictions will be compared to the outputs of the test set.
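The same split-and-evaluate procedure can be sketched locally with scikit-learn (again only as an analogy; BigML performs these steps through its dashboard):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

iris = load_iris()

# 80% training / 20% test, mirroring the split described above
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=0.2, random_state=42)

# Train on the 80% split only
model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# Predict on the held-out 20% and compare against its true labels
accuracy = accuracy_score(y_test, model.predict(X_test))
print(f"test accuracy: {accuracy:.2f}")
```

Because the test set was never seen during training, this accuracy is an honest estimate of how the model will behave on new data.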

All of these steps can be done easily in BigML; here is how.


Below is what you should have by now.


We can now easily create the training model as shown below.


We have successfully created a test dataset and a model from the training dataset. Now we ask BigML to use these two to evaluate the model and assess its accuracy.

Navigate to the “Evaluate” tab, as shown below.




There are a number of language bindings and wrappers available for BigML. On the Python side, the BigML Python bindings provide a convenient API to create, retrieve, list, update, and delete BigML resources (i.e., sources, datasets, models, and predictions). Python 2.7 and Python 3 are currently supported by these bindings.

I have also used BigML to determine the topics underlying a collection of documents, with very good results. In particular, BigML offers an optimized implementation of Latent Dirichlet Allocation (LDA), a probabilistic unsupervised learning method.
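To see what topic modeling with LDA looks like in code, here is a minimal local sketch using scikit-learn's implementation (not BigML's; the four documents are made up for illustration):

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "the striker scored a late goal in the match",
    "the team won the league after a tense match",
    "the central bank raised interest rates again",
    "markets fell as the bank tightened rates",
]

# LDA works on word counts, so vectorize the documents first
counts = CountVectorizer(stop_words="english").fit_transform(docs)

# Fit a 2-topic model; each document gets a mixture over the topics
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(counts)

# One row per document, one column per topic; rows sum to 1
print(doc_topics.shape)
```

Being unsupervised, LDA needs no labels: it infers the topics directly from word co-occurrence patterns across the collection.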

BigML can be used for a variety of Machine Learning problems as described here: What Machine Learning algorithms does BigML offer?. Moreover, BigMLKit brings the ease of “one-click-to-predict” to iOS and OS X devices.

And, as icing on the cake, for non-trivial contexts that combine different algorithms into a complex workflow (pipeline), the BigML team offers WhizzML: a programming language designed specifically for building automated machine learning pipelines, orchestrating different procedures and algorithms with the guarantee of scalable execution on the BigML SaaS platform.

Posted by lorenzo

Full-time engineer. I like to write about data science and artificial intelligence.
