How to Set Up Your Python Environment on a Mac

In this post I’m going to explain how to set up a working Python environment for Scientific Computing on macOS. That includes NumPy, SciPy, Matplotlib, etc. but also some more advanced tools and even a framework for Machine Intelligence and Deep Learning called TensorFlow.

Before we get started, I want to briefly give you a little bit of background so that you actually understand what you are doing and why you are doing it. Personally, I always prefer explanations that deliver insights to articles that provide no context at all and merely have me copy code snippets from some random website I found on the internet. If something isn’t working on my specific machine, I enjoy being able to fix the problem (because I understand what’s causing it) rather than having to look for a different solution. If you don’t care about the additional information and just want to know how to get Python up and running as quickly as possible, you can skip the following section and jump directly to the relevant part.

Many Roads Lead to Rome

There are multiple ways to set up a working Python environment. The most convenient one is to install a Python distribution such as Anaconda or Enthought Canopy. This installs not only the Python interpreter itself but also over 100 of the most popular packages for scientific computing. With this setup you are very well positioned for your upcoming Data Science projects. One point of criticism regarding this approach is that, while it is very beginner-friendly, you might never need most of these packages—especially if you’re only dipping your toes in the water. They would then only occupy disk space unnecessarily. If that’s a concern and you’d rather have more control over what gets installed and what doesn’t, there’s an alternative, more traditional approach.

In this more traditional approach, you use pip, the package manager for Python packages, to install exactly the packages you need (and nothing more), in combination with a tool called Homebrew to get the Python interpreter and pip in the first place. Anaconda has only been around since 2012 and Enthought Canopy since 2013, so this traditional approach was—as implied—the way to go before these distributions existed. The creator of Anaconda, Travis Oliphant, didn’t come out of nowhere, though. He is also the creator of SciPy, NumPy, and many more Python packages for scientific computing. He had been involved in the Python world since 1998 and grew unhappy with pip, so he wrote his own package manager. If you read his account of why he did it, you’ll understand what conda does better than pip. But humans are creatures of habit: many people have always used pip and never knew anything else, which I guess is why the pip approach is still recommended even today.

Using Anaconda has many, many advantages over the pip approach. Aside from its beginner-friendly installation, one of its major advantages is its easy code reproducibility. Anaconda lets you easily share your environment with others, e.g. your co-workers or an advisor. This means that you can make your exact same environment available to other researchers, i.e. they don’t have to install the required dependencies and configure the environment themselves; they just adopt yours! Never encounter the awkward “Well, it worked on my machine…” again. Imagine reading a paper or attending some conference or meetup and being able to instantaneously try out the code on your own machine, simply by adopting the author’s/speaker’s environment rather than having to investigate yourself which dependencies you need to install before you can get started.
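In practice, sharing an environment boils down to a small YAML file and two commands. The sketch below writes an example environment.yml by hand so it’s self-contained (the environment name and package list are just illustrative); with a real conda installation you would generate the file with `conda env export` instead:

```shell
# Write a minimal environment file by hand (normally you'd run
#   conda env export > environment.yml
# inside the environment you want to share):
cat > environment.yml <<'EOF'
name: science
channels:
  - conda-forge
dependencies:
  - python=3.6
  - numpy
  - scipy
EOF

# A collaborator recreates the exact environment with:
#   conda env create -f environment.yml
#   source activate science
grep -c '^  - ' environment.yml   # counts the listed channels/packages
```

The file pins everything the environment needs, so “it worked on my machine” becomes reproducible on theirs.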

To manage all said dependencies, Anaconda comes with a package manager called conda. In contrast to pip, this package manager is language-agnostic; that means it works not only for Python packages but for all sorts of languages, say R, Go, Lua, Julia, or Scala. This can be useful if a colleague is using a different language than you are. Even if you’re working alone but want to use multiple languages within the same project, conda can help you.

Anaconda’s package manager is also better than pip at packaging applications along with their required libraries. This might not be relevant to you right now, but it will be as soon as you become more experienced and want to publish your data together with your code so that others may verify your findings and build upon them.

And if you’re still arguing that you don’t need all the packages that come with Anaconda, there’s a lightweight variant of Anaconda called Miniconda which installs only the essential packages, not the slew of packages that comes with the full distribution. Miniconda is what we’re going to install in this guide. I’m using it personally, and so is Sebastian Raschka, author of a very popular book about Python. Don’t be fooled by its name, though: Miniconda is not a “little brother” of Anaconda or any less powerful. It has all the benefits of Anaconda yet installs only the most fundamental packages (hence the name), leaving the choice of which packages to install up to you. It thus combines the best of the two approaches.

The downside of installing your packages manually is that you need to educate yourself about which packages your project will need. Imagine a package manager like an app store without a graphical user interface. Unlike the app store on your smartphone, there are no curated categories or “Best Of” lists you can browse to explore new apps. To find and download a package, you first need to have heard about it and know its name before you can—just like on your smartphone—type that name into the search bar. For beginners who don’t even know what to search for, this can be quite intimidating. That’s why I wanted to write this guide for the less experienced programmers among us.

Installing Homebrew

Either way, the very first thing you need to do is to install Homebrew if you haven’t already. In case you don’t know how to do that, I covered Homebrew in-depth in a previous blog post. It would be really helpful if you read that article first, since Homebrew comes in very handy for almost any software you’ll need in your developer career and you can achieve many cool things with it if you understand how to use it.

The Preferred Approach Using Miniconda

If you want to set up your Python environment via Miniconda like I did, you can get Miniconda as follows (after you’ve installed Homebrew):

brew install wget
# check continuum.io for the latest installer URL before running this
wget https://repo.continuum.io/miniconda/Miniconda3-latest-MacOSX-x86_64.sh -P ~
bash ~/Miniconda3-latest-MacOSX-x86_64.sh
rm ~/Miniconda3-latest-MacOSX-x86_64.sh
echo 'export PATH="$HOME/miniconda3/bin:$PATH"' >> ~/.profile
source ~/.profile
conda install anaconda
conda update --all

That’s it! Now you have a working Python environment. It really is that simple.
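To convince yourself that the shell now picks up the Miniconda interpreter (and not Apple’s), a quick sanity check helps. The paths in the comments are examples; what matters is that `which python` points into ~/miniconda3:

```shell
# Where does the `python` command resolve to? With Miniconda on the PATH
# it should be something like /Users/you/miniconda3/bin/python:
#   which python
#   conda --version

# And the interpreter itself should be a Python 3:
python3 -c "import sys; print(sys.version_info >= (3, 0))"
```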

In case the installation didn’t work on your machine, or you’re just curious about these ~/.profile, ~/.bash_profile, and ~/.bashrc files, I highly recommend checking out my other blog post covering these so-called dotfiles. In that post I explain which of these files is the best one to use for the code snippet above. Since ~/.profile won’t be read if you already have a ~/.bash_profile that doesn’t explicitly source it, you should go read that post if you’re having trouble installing Python!

If you need more packages, e.g. TensorFlow, you can install them like so:

conda install tensorflow

The packages on Anaconda’s default channel are not always as up to date as possible, so it’s a good idea to tell conda to also search in other places by adding additional channels. You only have to do this once. From then on, the package manager will automatically choose the channel that has the newest version of the requested package. Add the conda-forge channel, an entirely community-led channel, like so:

conda config --add channels conda-forge

If you still can’t find the package you’re looking for (i.e. it’s not available on either channel), you can always fall back to using pip. Remember, pip is the Python-specific package manager I was talking about earlier. We didn’t use it in this approach, since we prefer conda. But since pip is included in the Python distribution installed by Anaconda/Miniconda, you can use it to install packages that aren’t available via conda.

To keep your locally installed packages up to date, you need to regularly run the following command:

conda update --all

You can view this cheat sheet to see all other conda commands.

By now, you’ve set up a full-fledged Python environment. Still, I recommend reading the alternative approach using Homebrew and pip. As you saw, you can use pip in combination with conda, and a lot of pip’s principles apply to conda too (e.g. virtual environments).

The Traditional Approach Using Pip and Homebrew

In case you neither want to use Anaconda nor Miniconda but want to solely use pip (in combination with Homebrew), the tutorial for you begins here.

As you may know, Apple pre-installs Python on your system by default. However, you definitely shouldn’t use that version; install your own version of Python instead. There are several reasons for this:

  • Apple’s Python distribution is outdated.
  • Apple itself recommends installing your own version.
  • Upgrading macOS—say from El Capitan to Sierra—can wipe the packages we’re going to install, forcing you to re-install everything after the next OS update.
  • Apple’s Python distribution does not include pip.
  • This site and similar ones give even more reasons. But they seem to just copy from one another and no one ever provides sources, so I didn’t fact-check their arguments.

Nevertheless, these arguments hopefully convinced you not to use the system-provided distribution of Python but to install the standard distribution instead. There are two different versions of Python available: Python 2 and Python 3. You should definitely use Python 3.

The only reason Python 2 is still around is compatibility. It really shouldn’t be used anymore, because it’s that outdated. Its end of life was originally scheduled for 2015, and developers were given enough time to make their software compatible with Python 3 by that date. Most developers did, but some didn’t, so because of a few old projects, support had to be extended until 2020. You really shouldn’t contribute to creating new software incompatible with Python 3 by using the outdated Python 2 for your new projects. If you genuinely need it, you can of course install both versions side by side. But I’d guess every relevant library supports Python 3 by now, so there’s no reason to start with Python 2 anymore.

Install Python 3 like so:

brew install python3

Along with Python, Homebrew installs the newest version of OpenSSL, so that Python can be compiled against it instead of the system-provided version. The pre-installed version of OpenSSL shouldn’t be used for similar reasons as above. To check whether the right version of OpenSSL is being used, run this command:

python3 -c "import ssl; print(ssl.OPENSSL_VERSION)"

The resulting output in your terminal should show a reasonably current version of OpenSSL, newer than 1.0.2 (older releases are out of support).
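If you’d rather not eyeball the version string, you can compare it programmatically; ssl.OPENSSL_VERSION_INFO exposes the same version as a tuple of numbers:

```shell
# Prints True when the linked OpenSSL is newer than 1.0.2:
python3 -c "import ssl; print(ssl.OPENSSL_VERSION_INFO >= (1, 0, 2))"
```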

Still quite often, you may stumble upon the PYTHONPATH variable, with people telling you to add it to your ~/.profile file. This is definitely not necessary anymore, and you shouldn’t do it. If you already set it, you should remove it. PYTHONPATH is a relic of the past, when it was used for switching between different Python installations as well as for importing Python modules. But that’s exactly what Homebrew does for us, so there’s no reason left at all to set PYTHONPATH.1
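You can see for yourself that the interpreter already knows where its packages live without any help from PYTHONPATH:

```shell
# The site-packages directory is on the module search path by default:
python3 -c "import sys; print(any('site-packages' in p for p in sys.path))"
```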

Updating Pip

Alright, now that you’ve installed the basic Python interpreter, we’re going to install a few so-called packages to add some cool new features to Python’s core functionality. Not all packages are available in Homebrew, so we first need a second package manager specifically for Python packages. There are several available, but the best one is called pip. Conveniently, you already have pip installed, since it comes bundled with the Python distribution. You do need to update it, though—together with some other things—to the newest versions:

pip3 install --upgrade pip setuptools wheel

Because we’re using Python 3, you need to type pip3 instead of pip.
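If you’re ever unsure which Python a given pip belongs to, you can ask the interpreter to run its own pip module; python3 -m pip is equivalent to pip3 and removes any ambiguity:

```shell
# Shows pip's version, plus the Python installation it installs into:
python3 -m pip --version
```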

Important Notes

Should you ever see something like sudo easy_install pip—don’t do it! This command is a leftover from earlier versions of Python, when pip wasn’t yet integrated in Python and you had to install pip yourself.2 You don’t need this command anymore.

Furthermore, you should never use pip in combination with sudo. This has always been a bad practice for several reasons:

  • In case you were using the Apple provided Python distribution instead of your own installation, packages were installed to /Library/Python/2.7/site-packages. Being a system folder, you needed sudo permission to write into it. But you never should’ve used Apple’s distribution in the first place.
  • Under certain special circumstances, pip could mess up when used with Apple’s distribution and confuse the just-mentioned folder /Library/Python/2.7/site-packages (meant for the packages installed by the user) with the folder for the packages pre-installed by Apple, located at /System/Library/Frameworks/Python.framework/Versions/2.7/. Since macOS 10.11 El Capitan introduced a new security feature called System Integrity Protection, it is no longer possible to write into /System/Library/.../ (not even with sudo).3 In this scenario, using sudo simply has no effect, since not even root is allowed to write into this very special folder. Again, Apple’s Python distribution should never have been used in the first place.
  • Even if you were using Apple’s distribution, you rather should’ve installed your packages in a virtual environment than mess up the system’s Python framework. This wouldn’t have required sudo either.
  • Furthermore, installing packages with root permission isn’t a particularly good idea, even if you downloaded them from a trustworthy source. They could either accidentally or—even worse—purposely mess up your system, thus leading to a defective or unreliable Python environment.
  • When using the Python distribution from Homebrew, pip installs its packages to /usr/local/.../ (e.g. /usr/local/lib/python3.5/site-packages), which is a safe place to write into and therefore doesn’t require sudo. So when sudo isn’t even necessary, what’s the point of using it? You’d only risk accidentally causing problems that could’ve been avoided entirely.
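You can verify where packages would be installed on your machine; with a Homebrew Python the printed paths live under /usr/local rather than /System (the exact path depends on your Python version):

```shell
# Lists the interpreter's site-packages directories:
python3 -c "import site; print(site.getsitepackages())"
```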

Installing Virtualenv (Optional)

If you’re only working on one project at a time—which is what you’re probably doing in the beginning—this step is totally optional. As a beginner, you really don’t need virtualenv for scientific computing. It’s perfectly fine to skip this step and revisit it once you’re working with multiple projects simultaneously.

With virtualenv you can manage your packages per project rather than globally. For example: one of your web development projects could need the latest version of Django, while another project relies on a very specific, older version of Django for compatibility reasons. By using virtualenv you can separate installed packages and even different versions of packages from each other. Since Python’s own internal package dependency system is very complicated and not particularly easy to understand, virtualenv is a huge simplification and may become very important. With virtualenvwrapper you can make working with virtualenv even easier, since it sets smart defaults and aliases for frequently used commands.

Install the packages like so:

pip3 install virtualenv virtualenvwrapper

Next create a new folder in your home directory where you’re going to store all your Python projects and name it something like “Code” or “Projects” (I’m going with “Projects”). You could create the folder using the terminal:

mkdir ~/Projects

Be sure that you’ve set the PATH variable. Then run the following commands, making sure to use the same name for PROJECT_HOME as the folder you just created. Setting PROJECT_HOME is totally optional but will turn out to be a very convenient timesaver.

echo '# needed for virtualenvwrapper' >> ~/.profile
echo export 'WORKON_HOME=$HOME/.virtualenvs' >> ~/.profile
# replace Projects with the name you gave your folder
echo export 'PROJECT_HOME=$HOME/Projects' >> ~/.profile
echo export VIRTUALENVWRAPPER_PYTHON=/usr/local/bin/python3 >> ~/.profile
echo export VIRTUALENVWRAPPER_VIRTUALENV=/usr/local/bin/virtualenv >> ~/.profile
echo export PIP_REQUIRE_VIRTUALENV=true >> ~/.profile
echo source /usr/local/bin/virtualenvwrapper.sh >> ~/.profile

The PIP_REQUIRE_VIRTUALENV=true line prevents you from accidentally installing packages globally: from now on you can only install packages while a virtual environment is active. If you don’t like this behavior, simply omit that line or set the variable to false.
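Since it’s an ordinary environment variable, you can also lift the guard for a single command by overriding it inline. The variable only applies to the one command it prefixes, which you can see with a plain echo:

```shell
# The override is visible to this one command only, not to your shell:
PIP_REQUIRE_VIRTUALENV=false sh -c 'echo "guard is: $PIP_REQUIRE_VIRTUALENV"'
```

The same prefix in front of a pip3 install line lets you make a deliberate global install without editing ~/.profile.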

Then either close the terminal and re-open it or reload just your ~/.profile file via

source ~/.profile

Now you’re ready to create your first virtual environment.

I find virtualenv much more useful for web development, though, where your projects demand different versions of the same package, as in the example above. For scientific computing, you’ll probably want each of your projects to always use the latest versions of its packages. You could still create one virtual environment for scientific purposes and switch to it whenever you’re not working on web projects.

To create a virtual environment, use the mkvirtualenv command and give it a meaningful name, e.g. “science” for the above mentioned environment for scientific computing:

mkvirtualenv science

You can create many more virtual environments this way and switch between them with the workon command and their respective names, e.g. workon science.

If you’re done with a specific virtual environment, you can leave it with the command deactivate.

To delete a virtual environment, simply enter rmvirtualenv followed by the name of the virtual environment you want to delete.
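Under the hood, workon and deactivate are thin wrappers around virtualenv’s activate script, which simply makes the environment’s own interpreter the one your shell finds first. The self-contained sketch below demonstrates the mechanics with Python’s built-in venv module, so you can try it even without virtualenvwrapper installed (--without-pip just keeps the throwaway demo fast):

```shell
# Create a throwaway environment and activate it:
python3 -m venv --without-pip demo-env
. demo-env/bin/activate

# While active, `python` resolves to the environment's own interpreter:
python -c "import sys; print(sys.prefix.endswith('demo-env'))"

# Leave the environment and clean up (what deactivate/rmvirtualenv do):
deactivate
rm -rf demo-env
```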

Regarding the time saver: since we’ve created the ~/Projects folder and set the PROJECT_HOME variable, we’re able to start an entirely new project with a single line of code. Examples:

mkproject my-personal-homepage
mkproject client-homepage

mkproject implicitly creates an associated virtual environment along with the new project folder. We don’t need to run mkvirtualenv anymore and can immediately switch to the virtual environment with workon my-personal-homepage.

Now let’s say you’re ready to make the switch and to work exclusively with virtual environments. So far, however, you’ve installed each and every package globally. How do you move all your existing packages into your newly created but empty science environment? There are several options:

  • One possibility is to grant your science environment access to your globally installed packages by entering toggleglobalsitepackages into your shell while the virtual environment of your choice is active. This somewhat defeats the whole purpose of a virtual environment, though. Additionally, this method can become quite messy once you’ve installed a few more packages, because from then on the packages will live in two different folders.
  • The second possibility is to uninstall everything pip-related and start fresh. You would, however, have to install all packages again, which is inconvenient. To uninstall the existing packages, you could use one of these commands, which both do exactly the same:
    • pip3 freeze | xargs pip3 uninstall -y
    • pip3 list | awk '{print $1}' | xargs pip3 uninstall -y
  • A third option would be to first copy your existing packages into your science environment before you uninstall all packages, except for virtualenv and virtualenvwrapper of course. Otherwise, how would you create new virtual environments without these packages? :stuck_out_tongue_winking_eye:

Copying Installed Packages Into a Virtual Environment

If you chose the third possibility, read on. Otherwise continue to the next section.

First deactivate the science environment. Then use the following command to create a text file which contains a list with the names of each of your installed packages, together with their respective version number.

pip3 freeze > ~/Desktop/requirements.txt

You’ll find this newly created text file on your desktop. Open it with any text editor, or use this command to open it for you:

open -e ~/Desktop/requirements.txt

Remove virtualenv and virtualenvwrapper from that list and hit save, since those two will be the only packages that remain installed globally.
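If you’d rather not edit the file by hand, grep can drop the two lines for you. The sketch below works on a made-up package list so it’s self-contained; point the same grep at ~/Desktop/requirements.txt for the real thing:

```shell
# Sample freeze output (your real file comes from `pip3 freeze`):
printf 'numpy==1.11.3\nvirtualenv==15.1.0\nvirtualenvwrapper==4.7.2\nscipy==0.18.1\n' > requirements.txt

# Keep every line except the two globally installed helpers:
grep -v -E '^(virtualenv|virtualenvwrapper)==' requirements.txt
```

Redirect the output to a new file (`> requirements-clean.txt`) if you want to keep the original list untouched.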

Next, uninstall every package except virtualenv and virtualenvwrapper like so:

pip3 uninstall -r ~/Desktop/requirements.txt

If there’s any package you don’t want installed again, open the text file once more and remove it as well.

Now activate your science environment via workon science and install the remaining packages from your list into your virtual environment:

pip3 install -r ~/Desktop/requirements.txt

When you’re done and you don’t need the text file for further virtual environments, delete it—either manually or like so:

rm ~/Desktop/requirements.txt

Installing Qt and PyQt

Qt is a popular toolkit typically used for GUIs in C++ applications. But we can make use of it for Python applications too, if we install an additional Python binding called PyQt.

The Qt framework and the PyQt binding are prerequisites for IPython’s Qt Console, which we will install later on. IPython is a significant enhancement to the Python console. SIP is another required dependency; PyQt is built on top of it.

brew install qt5
brew install sip --with-python3
brew install pyqt5

Installing Qt Creator for GUI Development (Optional)

So far we’ve only installed the libraries to make GUI development possible (and of course the usage of applications built on the Qt framework like IPython). If you’d like to actually create a GUI for your Python app, you’ll need the Qt Designer which is now integrated into the Qt Creator.

brew cask install qt-creator

In case you intend to develop a GUI for your application, you need a full installation of Xcode, not just the Xcode Command Line Tools. The Command Line Tools alone won’t be sufficient to install Qt Creator, the IDE needed to develop applications with a Qt GUI. Simply download Xcode from the Mac App Store and you’re good to go.

Installing the SciPy Stack

brew install pkg-config libpng freetype
pip3 install numpy scipy matplotlib pandas sympy nose

The SciPy Stack consists of NumPy, SciPy, Matplotlib, IPython (I’ll come back to this in more detail later on), pandas, SymPy, and nose. The command to install these packages is self-explanatory; NumPy comes first in the list because the rest builds on it.
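A quick smoke test tells you whether the stack is importable at all (the version numbers on your machine will differ):

```shell
# Imports NumPy and does a trivial computation; any ImportError means the
# installation didn't succeed:
python3 -c "import numpy; print(numpy.arange(4).sum())"
```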

In order to compile Matplotlib, you first need to install pkg-config, libpng, and freetype if they aren’t installed already. They are required for configuring Matplotlib during compilation as well as for manipulating PNG image files and rendering fonts (i.e. for displaying text in your plots).

Bear in mind that you should install packages either via Homebrew or via pip but never via both. Best practice is to generally prefer pip to brew for Python-specific packages and to use Homebrew only if the desired package couldn’t be installed via pip (because it’s general purpose and not Python-specific). In most cases, pip will give you a newer version than brew and works better with virtualenv.


In case you get an error message saying something like scipy couldn’t be installed because gfortran is missing, you need to install the “real” GCC. The reason is the following: gfortran is part of GCC, and GCC in turn comes with the Xcode Command Line Tools. But even though the Command Line Tools contain a gcc executable, it isn’t actually GCC but Clang in disguise.4 Usually this version is perfectly fine and you really shouldn’t encounter any problems.

Should you nonetheless run into the problem mentioned above, simply install gcc: first search for its newest version via brew search gcc, then install it with brew install. In Homebrew vocabulary, this pours a bottled (i.e. pre-compiled) version of gcc. Translation: someone else has already done the work of compiling it for you (likely on a much faster machine), and you just download the finished result. In some cases this doesn’t work quite right, however, and your computer will start compiling gcc itself. This takes a very, very long time. Your MacBook will probably get very hot and very loud in the process (because the fans will spin at maximum speed). This is normal and due to the tremendous amount of CPU power it takes to build gcc. If you accidentally started the installation or are too worried about the heat, you can abort the process at any time without any problems.

Installing IPython

IPython is a very powerful interactive shell for Python and the de-facto standard for scientific computing. It is much better than the interactive mode of Python. You use it by entering ipython in your Terminal instead of python3.

Yet another enhancement on top of IPython is Qt Console. It adds a GUI to IPython and therefore provides features that wouldn’t be possible without one—such as inline figures, proper multiline editing with syntax highlighting, graphical calltips, and much more. With Qt Console the shell becomes so powerful you don’t even need an IDE anymore. At this point you need to have Qt/PyQt installed, so if you skipped that step earlier, go back and catch up on it. You can then install everything you need with the following line:

pip3 install jupyter

You may be wondering why it says Jupyter. IPython became Project Jupyter in 2014, and the command to install IPython and all of its subprojects was renamed accordingly. Before that, you had to install IPython with a different command, which is now deprecated; I list it anyway so that you recognize it and don’t use it:

pip3 install ipython[all]

The [all] parameter automatically installed all the main optional dependencies like PyZMQ, Pygments, Jinja, Tornado or MathJax (needed for Qt Console and Jupyter Notebook).5 Otherwise you would’ve needed to install them individually. But as I said, this command was replaced by the above mentioned pip3 install jupyter and is not used anymore.

To test whether the installation worked out, you can either try to open the Qt Console

jupyter qtconsole

or run IPython’s test suite with the iptest command.

Jupyter Notebook

Jupyter Notebook is also part of Project Jupyter and was formerly known as IPython Notebook. It combines your data analysis tool with a word processor: Jupyter Notebook lets you create documents that contain embedded executable Python code. This way, you can easily annotate your Python code with LaTeX and present your results to other people. Gone are the days when you had to copy your results from Matlab, Maple, Excel etc. into Microsoft Word and constantly switch between applications.

First you cd (change directory) to the directory where you want to store your text files or open them from, and then simply open Jupyter Notebook like this:

jupyter notebook

This will start a local web server. To view the Notebook Dashboard, open http://localhost:8888 in your web browser (it’s a web application).

In order to be able to convert notebooks to various formats other than HTML (e.g. PDF), you’ll need to install Pandoc (a dependency for nbconvert). Pandoc is a standalone program, not a Python package, so it comes from Homebrew rather than pip:

brew install pandoc

Customizing IPython and Qt Console (optional)

You can even customize the Qt Console if you want to and use a better font like the popular Source Code Pro from Adobe after you’ve installed it via Homebrew Cask:

brew tap caskroom/fonts
brew cask install font-source-code-pro
jupyter qtconsole --ConsoleWidget.font_family="Source Code Pro" --ConsoleWidget.font_size=14

Installing TensorFlow

TensorFlow is a library for Machine Intelligence. With TensorFlow you can implement popular machine learning algorithms, specifically deep learning algorithms. It was developed by the Google Brain team (co-founded by the highly renowned Andrew Ng) and then open-sourced. TensorFlow has quickly become one of the leaders in the space, overtaking Scikit-learn—another popular tool collection for machine learning—in popularity. One of the primary reasons is that Scikit-learn lacks GPU support.

The New Way

You should be able to install TensorFlow like so:

pip3 install tensorflow

I say “should” because TensorFlow was added to PyPI (the “App Store” for pip) only comparatively recently. For whatever reason, the installation through pip does not always work. If that’s the case for you, don’t worry: you can still install TensorFlow “the old way”.

The Old Way

First check out TensorFlow’s “Installing TensorFlow on Mac OS X” page to find the correct download link. There’s a “CPU only” version and a “GPU enabled” version available. To make use of GPU support in TensorFlow, you need a graphics card with CUDA support, which means Nvidia cards only. So unless you have a discrete Nvidia GPU, go for the “CPU only” version.

You could copy the export-command from their page and manually paste it into your ~/.profile file—or do it much quicker like so:

# please check the website first if there's a newer URL available
echo export TF_BINARY_URL= >> ~/.profile
source ~/.profile
pip3 install --upgrade $TF_BINARY_URL

Whichever method you choose, remember not to use sudo (contrary to what the TensorFlow website says). It’s not necessary, for the reasons mentioned above. On the contrary: using sudo will cause an error message. The reason is that pip uses caching by default (just like web browsers cache the sites you’ve already visited). In this particular case, caching means writing into ~/Library/Caches/pip and ~/Library/Caches/pip/http. But the root user cannot write into either of them, since it doesn’t own your home folder. The caching of the package will fail, and you’ll get an error message informing you about that. Omitting sudo—i.e. installing TensorFlow as a regular user—solves the caching problem. If you’re still having trouble, retry with pip3 install --user instead of sudo pip3 install. Should you not want caching at all, use the --no-cache-dir option rather than misusing sudo.

Updating Packages

Updating Python packages with pip is still a little tiresome: you can’t update all packages at once like you can with conda, but have to update each package individually. You should nevertheless do this regularly to always work with the latest versions. To update your packages, you first need to find out which ones are outdated:

pip3 list --outdated

Then you can update them by listing their names after the --upgrade flag (or its shortcut -U), just like when you updated pip itself earlier in this guide:

pip3 install -U package1 package2 package3 ...
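If updating one by one gets old, you can chain the two steps with a small pipe: `pip3 list --outdated` prints two header lines followed by one package per row, and awk extracts the names. The sketch runs on canned output so you can see what the pipe does before pointing it at pip (the column layout shown is an assumption based on pip’s columns format; yours may differ):

```shell
# Simulated `pip3 list --outdated` output:
printf 'Package Version Latest Type\n------- ------- ------ -----\nnumpy   1.11.0  1.12.0 wheel\n' \
  | awk 'NR > 2 { print $1 }'

# Against real pip, finish the pipe with:
#   pip3 list --outdated | awk 'NR > 2 { print $1 }' | xargs -n1 pip3 install -U
```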

Final Words

I did my best to explain everything as beginner-friendly as possible, since the authors of all the other articles I found when I was learning this stuff assumed I already knew what the PATH variable was or that I had a deep understanding of the UNIX file system. I spent a long time reading Stack Overflow and the like, and had to learn all of this the hard way. It would make me really happy if I could spare you that effort. If you found this article helpful or still have questions, why don’t you leave a comment down below? It would be greatly appreciated :blush:

  1. Even if there were, PATH would be a far more appropriate place than PYTHONPATH. But since PATH already includes the location of the Python interpreter, /usr/local/bin, setting PYTHONPATH is absolutely unnecessary. 

  2. Heck, pip didn’t even exist yet. Back then, the package manager of choice was called easy_install. And when the much better pip was presented to the Python community, you had to use your current package manager to install another package manager :smiley: Not until Python 2.7.9 and Python 3.4, respectively, was pip integrated into the Python distribution, which made easy_install obsolete. 

  3. For further information see 

  4. Meanwhile, many companies like Apple and Google prefer Clang to GCC as their compiler front end. To make the transition smoother, Apple symlinks Clang executables to GCC-like names. Under very specific circumstances (e.g. certain GCC parameters Clang doesn’t know), this can lead to errors. 

  5. With PyZMQ being only a Python binding for ZeroMQ (the messaging library behind it), you’d also need zmq for PyZMQ to work. However, the setup routine for PyZMQ is intelligent enough to install zmq by itself. Thus there’s no need for you to first install zmq via Homebrew.