A Practical Intro to Data Science

There are plenty of articles and discussions on the web about what data science is, what qualities define a data scientist, how to nurture them, and how you should position yourself to be a competitive applicant. There are far fewer resources out there about the steps to take in order to obtain the skills necessary to practice this elusive discipline. Here we will provide a collection of freely accessible materials and content to jumpstart your understanding of the theory and tools of Data Science.

At Zipfian Academy, we believe that everyone learns at different paces and in different ways. If you prefer a more structured and intentional learning environment, we run a 12-week immersive bootcamp that trains people to become data scientists through hands-on projects and real-world applications. We also host a free Skillshare course covering much of this material at a high level.

We would love to hear your opinions on what qualities make great data scientists, what a data science curriculum should cover, and what skills are most valuable for data scientists to know. Share your thoughts over at Hacker News!

While the information contained in these resources is a great guide and reference, the best way to become a data scientist is to make, create, and share!

Environment

While the emerging field of data science is not tied to any specific tools, there are certain languages and frameworks that have become the bread and butter for those working in the field. We recommend Python as the programming language of choice for aspiring data scientists due to its general purpose applicability, a gentle (or firm) learning curve, and — perhaps the most compelling reason — the rich ecosystem of resources and libraries actively used by the scientific community.

Development

When learning a new language in a new domain, it helps immensely to have an interactive environment to explore and to receive immediate feedback. IPython provides an interactive REPL which also allows you to integrate a wide variety of frameworks (including R) into your Python programs.
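
To give a feel for that workflow, here is a short, illustrative IPython session (the prompts and output are IPython's, the exact numbers will differ from run to run, and the R bridge assumes the rpy2 package is installed):

```
$ ipython
In [1]: import numpy as np

In [2]: data = np.random.randn(1000)     # 1,000 draws from a standard normal

In [3]: data.mean()                      # immediate feedback on any expression
Out[3]: 0.0123...                        # exact value varies run to run

In [4]: %timeit data.cumsum()            # built-in magic for quick micro-benchmarks

In [5]: %load_ext rpy2.ipython           # R bridge (requires the rpy2 package)

In [6]: %R summary(rnorm(100))           # run an R expression without leaving Python
```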

Statistics

It is often said that a data scientist is someone who is better at software engineering than any statistician and better at statistics than any software engineer. As such, statistical inference underpins much of the theory behind data analysis, and a solid foundation in statistical methods and probability serves as a stepping stone into the world of data science.

Courses

While R is the de facto standard for performing statistical analysis, it has quite a high learning curve and there are other areas of data science for which it is not well suited. To avoid learning a new language for a specific problem domain, we recommend trying to perform the exercises of these courses with Python and its numerous statistical libraries. You will find that much of the functionality of R can be replicated with NumPy, SciPy, matplotlib, and pandas.
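
As a quick sanity check that the Python stack covers the everyday R workflow, the sketch below computes R-style summary statistics, runs a two-sample t-test, and draws a histogram. It is a minimal illustration with synthetic data, not part of any of the courses above.

```python
# Minimal sketch: summary(), t.test(), and hist() style steps in Python.
import numpy as np
import pandas as pd
from scipy import stats
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)
df = pd.DataFrame({
    "control":   rng.normal(loc=50.0, scale=5.0, size=200),
    "treatment": rng.normal(loc=52.0, scale=5.0, size=200),
})

print(df.describe())                                  # roughly R's summary()

# Two-sample t-test, comparable to R's t.test()
t_stat, p_value = stats.ttest_ind(df["treatment"], df["control"])
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")

# Quick histogram of both groups, comparable to R's hist()
df.plot(kind="hist", alpha=0.5, bins=30)
plt.xlabel("measurement")
plt.show()
```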

Books

Well written books can be a great reference (and supplement) to these courses, and also provide a more independent learning experience. These may be useful if you already have some knowledge of the subject or just need to fill in some gaps in your understanding:

Machine Learning/Algorithms

A solid base of Computer Science and algorithms is essential for an aspiring data scientist. Luckily, there is a wealth of great resources online, and machine learning is one of the more lucrative (and advanced) skills of a data scientist.
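
To make the train-and-evaluate loop behind most applied machine learning concrete, here is a minimal sketch using scikit-learn (a common Python choice, though not named in the resource list); the dataset and model are arbitrary illustrations.

```python
# Minimal sketch of the basic supervised-learning loop: split, fit, evaluate.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)

# Hold out a test set so the evaluation reflects unseen data.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

predictions = model.predict(X_test)
print("held-out accuracy:", accuracy_score(y_test, predictions))
```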

Courses

Books

Data ingestion and cleaning

One of the most under-appreciated aspects of data science is the cleaning and munging of data, which often represents the most significant time sink during analysis. While there is no silver bullet for this problem, knowing the right tools, techniques, and approaches can help minimize the time spent wrangling data.
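
As a taste of what that wrangling looks like in Python, here is a small self-contained sketch using pandas; the records and column names are hypothetical stand-ins for a raw export.

```python
# Illustrative munging steps: normalize names, coerce types, handle bad values.
import numpy as np
import pandas as pd

# Hypothetical messy records, standing in for a raw CSV export.
raw = pd.DataFrame({
    "Respondent ID": [1, 2, 2, 3, 4],
    "Age": ["34", "twenty", "twenty", "-1", "52"],
    "Submitted At": ["2013-05-01", "2013-05-02", "2013-05-02", "bad date", "2013-05-04"],
})

# Normalize messy column names.
raw.columns = [c.strip().lower().replace(" ", "_") for c in raw.columns]

# Coerce numeric and date columns; unparseable entries become NaN/NaT.
raw["age"] = pd.to_numeric(raw["age"], errors="coerce")
raw["submitted_at"] = pd.to_datetime(raw["submitted_at"], errors="coerce")

# Treat impossible ages as missing, then drop duplicates and unusable rows.
raw.loc[(raw["age"] < 0) | (raw["age"] > 120), "age"] = np.nan
clean = raw.drop_duplicates(subset=["respondent_id"]).dropna(subset=["age"])

print(clean)
```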

Courses

Tutorials

  • Predictive Analytics: Data Preparation: An introduction to the concepts and techniques of sampling data, accounting for erroneous values, and manipulating the data to transform it into acceptable formats.

Tools

  • OpenRefine (formerly Google Refine): A powerful tool for working with messy data: cleaning it, transforming it, extending it with web services, and linking it to databases. Think Excel on steroids.

  • DataWrangler: Stanford research project that provides an interactive tool for data cleaning and transformation.

  • sed: “The ultimate stream editor” — processes text streams with regular expressions, most often for substitutions.

  • awk: “Another cornerstone of UNIX shell programming” — used for processing rows and columns of information.

Visualization

The most insightful data analysis is useless unless you can effectively communicate your results. The art of visualization has a long history, and while it is one of the most qualitative aspects of data science, its methods and tools are well documented.
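
Before reaching for the tools below, it helps to see the basic pattern in code. The following minimal sketch uses matplotlib (mentioned in the Statistics section above); the data are synthetic and purely illustrative.

```python
# Minimal sketch: plot data, then label everything so the figure stands alone.
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 10, 200)
y = np.sin(x) + np.random.normal(scale=0.2, size=x.size)   # noisy signal

fig, ax = plt.subplots()
ax.scatter(x, y, s=10, alpha=0.5, label="observations")
ax.plot(x, np.sin(x), color="black", label="underlying trend")
ax.set_xlabel("time (s)")
ax.set_ylabel("signal")
ax.set_title("Label axes, units, and series")
ax.legend()
plt.show()
```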

Courses

Books

Tutorials

Tools

  • D3.js: Data-Driven Documents — declarative manipulation of DOM elements with data-dependent functions (with a Python port).

  • Vega: A visualization grammar built on top of D3 for declarative visualizations in JSON. Released by the dream team at Trifacta, it provides a higher-level abstraction than D3 for creating canvas- or SVG-based graphics.

  • Rickshaw: A charting library built on top of D3 with a focus on interactive time series graphs.

  • modest maps: A lightweight library with a simple interface for working with maps in the browser (with ports to multiple languages).

  • Chart.js: Very simple (only six chart types) HTML5 canvas-based plotting library with beautiful styling and animation.

Computing at Scale

When you start operating with data at the scale of the web (or greater), the fundamental approach and process of analysis must change. To combat the ever-increasing amount of data, Google developed the MapReduce paradigm. This programming model has become the de facto standard for large-scale batch processing since the 2007 release of Apache Hadoop, the open-source MapReduce framework.
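
To make the paradigm concrete, here is a toy, single-machine word count expressed in the MapReduce style; Hadoop applies the same map, shuffle, and reduce structure but distributes each phase across a cluster. This is an illustrative sketch, not a Hadoop program.

```python
# Toy MapReduce: map to (key, value) pairs, group by key, reduce per key.
from collections import defaultdict

documents = [
    "the quick brown fox",
    "the lazy dog",
    "the quick dog barks",
]

def map_phase(doc):
    # Emit a (word, 1) pair for every word in the document.
    for word in doc.split():
        yield word, 1

def reduce_phase(word, counts):
    # Combine all counts emitted for a single word.
    return word, sum(counts)

# "Shuffle": group intermediate values by key.
grouped = defaultdict(list)
for doc in documents:
    for word, count in map_phase(doc):
        grouped[word].append(count)

word_counts = dict(reduce_phase(w, c) for w, c in grouped.items())
print(word_counts)   # e.g. {'the': 3, 'quick': 2, ...}
```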

Courses

Books

Putting it all together

Data Science is an inherently multidisciplinary field that requires a myriad of skills to be a proficient practitioner. The necessary curriculum has not traditionally fit into standard course offerings, but as awareness of the need for individuals with these abilities grows, we are seeing universities and private companies create custom classes.

Courses

Books

Tutorials

Conclusion

This just scratches the surface of the infinitely deep field of Data Science, and we encourage everyone to go out and try their hand at some science! We would love for you to join the conversation over at @zipfianacademy and let us know if you want to learn more about any of these topics.

Blogs

  • Data Beta: Professor Joe Hellerstein’s blog about education, computing, and data.

  • Dataists: Hilary Mason and Vince Buffalo’s old blog that has a wealth of information and resources about the field and practice of data science.

  • Five Thirty Eight: Nate Silver’s famous NYT blog where he discusses predictive modeling and political forecasts.

  • grep alex: Alex Holmes’s blog about distributed computing and the intricacies of Hadoop.

  • Data Science 101: One man’s personal journey to becoming a data scientist (with plenty of resources).

  • no free hunch: Kaggle’s blog about the practice of data science and its competition highlights.

Resources
