Our part-time Data Science course in Santiago gives you the skills you need to launch your career in a data science team, in 24 weeks of study on select weekday evenings and Saturdays. From Pandas to Deep Learning, you will finish the course knowing how to explore, clean and transform data into actionable insights, and how to implement machine learning models from start to finish in a production environment, working in teams with a best-in-class tool belt.
Our Data Science course is designed to take you step by step from the basic data toolkit in Python to implementing a Machine Learning model in a production environment.
Our Data Science course is very intense. To save time and nail it from the beginning, our students must complete online preparation work before starting the bootcamp. This work takes around 40 hours and covers the basics of Python, the prerequisite language of the course, and some mathematical topics used every day by data scientists.
Learn to program in Python, work with Jupyter Notebook, and use powerful Python libraries like Pandas and NumPy to explore and analyze big datasets. Collect data from various sources, including CSV files, SQL queries on relational databases, Google BigQuery, APIs and web scraping.
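As a taste of that toolkit, here is a minimal sketch of exploring a dataset with Pandas. The CSV content and column names are made up for illustration; a real project would load a file or query a database instead.

```python
import io
import pandas as pd

# A tiny in-memory CSV standing in for a real data source (illustrative only)
csv_data = io.StringIO(
    "city,population,region\n"
    "Santiago,6310000,RM\n"
    "Valparaiso,296000,V\n"
    "Concepcion,223000,VIII\n"
)

df = pd.read_csv(csv_data)        # load the CSV into a DataFrame
print(df.head())                  # first rows at a glance
print(df["population"].mean())    # quick summary statistic
largest = df.loc[df["population"].idxmax(), "city"]
print(largest)                    # city with the largest population
```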
Learn how to formulate a good question and how to answer it by building the right SQL query. This module will cover schema architecture and then dive deep into the advanced manipulation of SELECT to extract useful information from a standalone database, using SQL client software like DBeaver.
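The question-to-query workflow can be sketched with Python's built-in sqlite3 module. The table and data below are invented for illustration; the point is turning a business question into a SELECT statement.

```python
import sqlite3

# In-memory SQLite database standing in for a relational source (illustrative)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, "Ana", 120.0), (2, "Luis", 80.0), (3, "Ana", 200.0)],
)

# Formulate a question -- "who spends the most?" -- and answer it with a query
query = """
SELECT customer, SUM(amount) AS total
FROM orders
GROUP BY customer
ORDER BY total DESC
"""
rows = conn.execute(query).fetchall()
print(rows)  # [('Ana', 320.0), ('Luis', 80.0)]
```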
Make your data analyses more visual and understandable by including data visualizations in your Notebook. Learn how to plot your data frames using Python libraries such as matplotlib and seaborn and transform your data into actionable insights.
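A minimal plotting sketch with matplotlib, using an invented DataFrame; the off-screen backend is set only so the snippet runs without a display.

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen; no display needed
import matplotlib.pyplot as plt
import pandas as pd

# Toy DataFrame (illustrative): monthly signups
df = pd.DataFrame({"month": ["Jan", "Feb", "Mar", "Apr"],
                   "signups": [40, 55, 70, 90]})

fig, ax = plt.subplots()
ax.bar(df["month"], df["signups"])   # plot the DataFrame directly
ax.set_title("Signups per month")
ax.set_ylabel("signups")
fig.savefig("signups.png")           # export the chart as an image
```

Seaborn builds on exactly this matplotlib machinery, adding higher-level statistical plots on top.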
Understand the underlying math behind all the libraries and models used in the bootcamp. Become comfortable with the basic concepts of statistics & probability (mean, variance, random variables, Bayes' Theorem, etc.) and with matrix computation, which is at the core of numerical operations in libraries like Pandas and NumPy.
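Both ideas fit in a few lines of NumPy. The probabilities below are made-up numbers chosen only to illustrate Bayes' Theorem; the matrix example shows the vectorized operations that libraries build on.

```python
import numpy as np

# Bayes' Theorem: P(A|B) = P(B|A) * P(A) / P(B)
# Made-up example: a test for a condition with 1% prevalence
p_a = 0.01                      # P(condition)
p_b_given_a = 0.99              # P(positive | condition)
p_b_given_not_a = 0.05          # P(positive | no condition)
p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)
p_a_given_b = p_b_given_a * p_a / p_b
print(round(p_a_given_b, 3))    # 0.167 -- surprisingly low despite a "99% accurate" test

# Matrix computation: NumPy applies operations to whole arrays at once
m = np.array([[1.0, 2.0], [3.0, 4.0]])
v = np.array([1.0, 1.0])
print(m @ v)                    # matrix-vector product -> [3. 7.]
```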
You'll learn how to structure a Python repository with object-oriented programming to keep your code clean and reusable, how to survive the data preparation phase of a vast dataset, and how to find and interpret meaningful statistical results based on multivariate regression models.
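A hypothetical sketch of that object-oriented style: wrapping data-preparation steps in a class (the `Cleaner` name and its methods are invented) makes them reusable across notebooks and scripts.

```python
import pandas as pd

# Hypothetical example: a reusable, chainable data-cleaning class
class Cleaner:
    """Reusable cleaning steps for a DataFrame."""

    def __init__(self, df: pd.DataFrame):
        self.df = df

    def drop_missing(self) -> "Cleaner":
        self.df = self.df.dropna()
        return self  # return self so steps can be chained

    def normalize_columns(self) -> "Cleaner":
        self.df.columns = [c.strip().lower() for c in self.df.columns]
        return self

raw = pd.DataFrame({" Age ": [25, None, 31], "City": ["Santiago", "Lima", None]})
clean = Cleaner(raw).drop_missing().normalize_columns().df
print(clean)
```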
Data analysts are expected to communicate their findings to non-technical audiences: you will learn how to create impact by explaining your technical insights and turning them into business decisions using cost/benefit analysis. You'll be able to share your progress and present and compare your results with your teammates.
Learn how to explore, clean, and prepare your dataset through preprocessing techniques like vectorization. Get familiar with the classic models of supervised learning: linear and logistic regression. Learn how to solve prediction and classification tasks with the Python library scikit-learn, using learning algorithms like KNN (k-nearest neighbors).
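A classification task with scikit-learn's KNN fits in a few lines. This sketch uses the bundled Iris dataset rather than course data, purely for illustration.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Classic classification with k-nearest neighbors (illustrative)
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

model = KNeighborsClassifier(n_neighbors=5)
model.fit(X_train, y_train)             # learn from the training set
accuracy = model.score(X_test, y_test)  # evaluate on held-out data
print(accuracy)
```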
Implement training and testing phases to make sure your model generalizes to unseen data and can be deployed to production with predictable accuracy. Learn how to prevent overfitting using regularization methods and how to choose the right loss function to improve your model's accuracy.
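Regularization in one small sketch: ridge regression adds an L2 penalty that shrinks coefficients, which helps when only a few features truly matter. The synthetic data below is invented for the demonstration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.model_selection import train_test_split

# Synthetic data (illustrative): 20 features, only 3 of which matter
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
true_coef = np.zeros(20)
true_coef[:3] = [2.0, -1.0, 0.5]
y = X @ true_coef + rng.normal(scale=0.5, size=100)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

plain = LinearRegression().fit(X_train, y_train)
ridge = Ridge(alpha=10.0).fit(X_train, y_train)  # L2 penalty shrinks coefficients

print("plain R2:", plain.score(X_test, y_test))  # evaluated on unseen data
print("ridge R2:", ridge.score(X_test, y_test))
```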
Evaluate your model's performance by defining what to optimize and the right error metrics to assess your business impact. Improve your model's performance with validation methods such as cross-validation and hyperparameter tuning. Finally, discover a powerful supervised learning method: SVM (Support Vector Machines).
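Cross-validation in practice: instead of trusting a single train/test split, the model is scored on several different splits. Again the Iris dataset stands in for real data.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# 5-fold cross-validation: train and evaluate on 5 different splits
X, y = load_iris(return_X_y=True)
model = KNeighborsClassifier(n_neighbors=5)

scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
print(scores)          # one accuracy score per fold
print(scores.mean())   # a more robust estimate than a single split
```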
Move to unsupervised learning and implement methods like PCA for dimensionality reduction and clustering for discovering groups in a dataset. Complete your tool belt with ensemble methods that combine other models to improve performance, such as Random Forest and Gradient Boosting.
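Two of those tools side by side, using the Iris dataset for illustration: PCA compresses the features while keeping most of the variance, and a Random Forest combines many decision trees into one stronger model.

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)

# PCA: project 4 features down to 2 while keeping most of the variance
pca = PCA(n_components=2)
X_2d = pca.fit_transform(X)
print(pca.explained_variance_ratio_.sum())  # share of variance retained

# Random Forest: an ensemble of decision trees
forest = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(forest, X, y, cv=5).mean())
```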
Unveil the magic behind Deep Learning by understanding the architecture of neural networks (neurons, layers, stacks) and their parameters (activations, losses, optimizers). Become autonomous in building your own networks, especially to work with images, time series and text, while learning the techniques and tricks that make Deep Learning work.
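The core of a neural network is small enough to write by hand. This NumPy sketch (random weights, no training, no framework) shows what "neurons, layers, stacks" means: each layer is a matrix multiply plus an activation, and layers are stacked by feeding outputs forward.

```python
import numpy as np

# One dense layer computes: activation(weights @ input + bias)
def relu(x):
    return np.maximum(0.0, x)

rng = np.random.default_rng(42)
x = rng.normal(size=3)            # 3 input features
W1 = rng.normal(size=(4, 3))      # hidden layer: 4 neurons
b1 = np.zeros(4)
W2 = rng.normal(size=(2, 4))      # output layer: 2 neurons
b2 = np.zeros(2)

hidden = relu(W1 @ x + b1)        # stack layers by feeding outputs forward
output = W2 @ hidden + b2
print(output.shape)               # (2,)
```

In the course, a framework handles the weights, losses and optimizers; the arithmetic underneath is exactly this.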
Go further into computer vision with Convolutional Neural Networks, architectures designed to get the most out of images. Improve your model's generalization with data augmentation techniques, and benefit from state-of-the-art architectures through transfer learning.
Get comfortable managing sequential data and text (sequences of words) by transforming them into appropriate model inputs. Leverage the power of Recurrent Neural Networks to forecast future values and perform valuable Natural Language Processing tasks.
Move from Jupyter Notebook to a code editor and learn how to set up a machine learning project the right way, so you can iterate quickly and confidently. Learn how to turn a machine learning model into a robust, scalable pipeline using scikit-learn's Pipeline with encoders and transformers.
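A pipeline bundles preprocessing and the model into one object, so the same steps run identically in training and in production. The churn dataset and feature names here are invented for illustration.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Toy dataset (illustrative): one numeric and one categorical feature
df = pd.DataFrame({
    "age": [22, 35, 58, 44, 29, 61],
    "plan": ["basic", "pro", "pro", "basic", "basic", "pro"],
    "churned": [1, 0, 0, 1, 1, 0],
})
X, y = df[["age", "plan"]], df["churned"]

# Encoders and transformers bundled with the model in one pipeline
preprocess = ColumnTransformer([
    ("scale", StandardScaler(), ["age"]),
    ("encode", OneHotEncoder(), ["plan"]),
])
pipe = Pipeline([("prep", preprocess), ("model", LogisticRegression())])

pipe.fit(X, y)              # one object handles preprocessing + training
print(pipe.predict(X))      # and the same object handles inference
```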
Building a machine learning model from start to finish requires a lot of data preparation, experimentation, iteration and tuning. We'll teach you how to do your feature engineering and hyperparameter tuning in order to build the best model. For this, we will leverage a library called MLflow.
Finally, we'll show you how to deploy your code and model to production. Using Google Cloud AI Platform, you'll be able to train your model at scale, package it and make it available to the world. Cherry on top, you will use a Docker environment to deploy your own RESTful Flask API, which can be plugged into any front-end interface.
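A minimal sketch of such an API. The endpoint name and the "model" rule are hypothetical stand-ins for a real trained model; Flask's test client lets the snippet exercise the route without starting a server.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json()
    # A real app would call model.predict here; this rule is a placeholder
    prediction = 1 if payload.get("amount", 0) > 100 else 0
    return jsonify({"prediction": prediction})

# Exercise the API in-process using Flask's test client
client = app.test_client()
response = client.post("/predict", json={"amount": 250})
print(response.get_json())   # {'prediction': 1}
```

In production, the same `app` object would be served by a WSGI server inside the Docker container.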
You'll spend the last two weeks on a group project working on an exciting data science problem you want to solve! As a team, you'll learn how to collaborate efficiently on a real data science project through a common Python repository and the Git flow. You will use a mix of your own datasets (if you have any from your company / non-profit organisation) and open-data repositories (Government initiatives, Kaggle, etc.). It will be a great way to practise all the tools, techniques and methodologies covered in the Data Science Course and will make you realize how autonomous you have become.
Learn to code in 24 weeks with a tailor-made program adapted to your busy schedule.
Live a unique learning experience every week.
Meet your peers and teachers three times a week to work on coding challenges. Learn to think and solve problems like a software developer.
Watch lectures at your own pace on our online platform. Grasp core concepts and prepare yourself for the next coding session. Consolidate your knowledge daily by playing our Flashcards.
Every week, join us for events with entrepreneurs and hiring partners. Create your own network within a thriving tech scene.
Since day one, we’ve taken teaching seriously. Great teachers inspire us to connect to topics on a profound level. Experience as a developer alone doesn’t necessarily make one an effective teacher — that’s why we’re passionate about finding not only great engineers, but deeply committed, experienced teachers.
Sebastián had been working for 5 years in the airline industry in Latin America when he realized the world was shifting towards tech. With this new aim in mind, he learned to code, and today he has deployed several platforms using RoR & React.
After 5 years of working as a PM in Paris, Remi got bored and moved into web development, building products. In recent years, he has co-founded 2 startups in Buenos Aires as CTO. He is currently working at Rappi as Head of Subscription.
After years of working as a web & app developer, Felipe became an independent consultant. He is proficient in several web technologies and has recently joined the Le Wagon Buenos Aires dev team. He is passionate about teaching and loves the tech industry.
Our Data Science course is just the beginning of the journey. Once you graduate, you belong to a global tech community and have access to our online platform to keep learning and growing.
Get tips and advice from professional data scientists & data analysts, and access exclusive job and freelance opportunities from entrepreneurs & developers.
Access our online education platform at any time after the course: you will find all data science lectures, screencasts, challenges and flashcards.
Benefit from our global community of 9322 alumni working in data-related roles, but also entrepreneurs, developers and product managers all over the world.
Our different courses are running in 40 campuses all over the world: wherever you go, you belong to the Le Wagon community!
Once the course ends, you benefit from our career services. We help you meet with the best recruiters and connect with relevant alumni.
Access a complete guide to kick-start your Data Science career after the course: boost your portfolio, find your dream job, and leverage our community of 9322 alumni.
Attend our job fairs and networking events, meet with the best tech companies and receive offers from recruiters looking for talent in data-related roles.
Our data science course alumni love to share their experiences with fresh graduates: they explain how they found their jobs as Data Scientists, Data Analysts or Data Engineers.
Our local teams know their alumni and hiring partners, what they are up to and what they are looking for. They introduce you to the right people depending on your goal.
The best companies partner with Le Wagon and hire our alumni as Data Scientists, Data Analysts or Data Engineers.