We are Good!

    TLDR; You can donate easily here

    This goes directly to Kerala Chief Minister's Distress Relief Fund. It's easy to use and completely tax exempt. It also welcomes foreign donations.

    Relieved to share that most of my family and friends have survived the terrible Kerala floods, which have been raging for days. Of course, everything is not okay. There is a long and painful road ahead to try to rebuild what we have lost. But then there are some losses we might never recover from.

    Many friends, colleagues and relatives have been calling from all over the world. Their reactions range from “maybe global warming is real because this is happening everywhere” to “humanity is doomed”. Of course, I am ignoring silly reactions like “must be wrath of god” or “is it fake news”.

    Everything these days is seen as a sign of humanity’s impending DOOM. We have sort of turned hollow. Everything seems to be about money or politics. People are supposed to have no other motive or emotion. Forget a soul.

    I think we need to remember one really important thing – it is not always about the money. Any species, be it animal, bird or insect, will help when they see one of their own kind in trouble.

    If an old man collapsed on the road clutching his chest, people wouldn’t first ask which state, country or religion he belongs to before trying to help him. The day we do, trust me, we won’t need global warming or even killer robots. Humanity is doomed. Both metaphorically and literally.

    Thankfully there is hope. In times of this crisis, I’ve been in constant touch with my friends who are helping in any way possible. Those who can drive are picking up total strangers. Those who can code are rapidly building resilient applications to coordinate rescue. Those who can cook are preparing thousands of food packets and infant food. Even those who cannot afford four square meals a day are donating whatever they can.

    If there is even one human being somewhere thanking you for your act of kindness, doesn’t it mean something? Isn’t that the ultimate purpose of our life – to be useful to someone? To make a difference?

    There is an overwhelming humanity around us. And that’s why we will survive. Together.

    PS: Please donate wholeheartedly.


    Understanding Django Channels

    You are reading a post from a two-part tutorial series on Django Channels

    Django Channels

    Django Channels was originally created to solve the problem of handling asynchronous communication protocols such as WebSockets. More and more web applications were providing real-time capabilities like chat and push notifications, and various workarounds, such as running separate socket servers or proxy servers, were created to make Django support such requirements.

    Channels is an official Django project not just for handling WebSockets and other forms of bi-directional communication but also for running background tasks asynchronously. As of writing, Django Channels 2 is out which is a complete rewrite based on Python 3’s async/await based coroutines.

    This article covers the concepts of Django Channels and leads you to a video tutorial implementing a notifications application in Django.

    Here is a simplified block diagram of a typical Channels setup:

    Django Channels

    How a typical Django Channels infrastructure works

    A client such as a web browser sends both HTTP/HTTPS and WebSocket traffic to an ASGI server like Daphne. Like WSGI, the ASGI (Asynchronous Server Gateway Interface) specification is a common way for application servers and applications to interact with each other asynchronously.

    As in a typical Django application, HTTP traffic is handled synchronously, i.e. when the browser sends a request, it waits until it is routed to Django and a response is sent back. However, it gets a lot more interesting with WebSocket traffic, because messages can be triggered from either direction.

    Once a WebSocket connection is established, a browser can send or receive messages. A sent message reaches the Protocol type router that determines the next routing handler based on its transport protocol. Hence you can define a router for HTTP and another for WebSocket messages.

    These routers are very similar to Django’s URL mappers, but map the incoming messages to a consumer (rather than a view). A consumer is like an event handler that reacts to events. It can also send messages back to the browser, thereby containing the logic for fully bi-directional communication.
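    To make that concrete, a routing file in Channels 2 might look something like this configuration sketch (the app and consumer names here are made up for illustration):

```
# routing.py -- a configuration sketch; "notifications" app and
# NotificationConsumer are hypothetical names
from channels.routing import ProtocolTypeRouter, URLRouter
from django.urls import path

from notifications.consumers import NotificationConsumer

application = ProtocolTypeRouter({
    # HTTP is handled by Django's normal views when not listed here
    "websocket": URLRouter([
        path("ws/notifications/", NotificationConsumer),
    ]),
})
```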

    A consumer is a class whose methods you may choose to write either as normal Python functions (synchronous) or as awaitables (asynchronous). Asynchronous code should not mix with synchronous code, so there are conversion functions to convert from one to the other. Remember that the Django parts are synchronous. A consumer is, in fact, a valid ASGI application.
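    Since an ASGI application is just an awaitable taking scope, receive and send, we can sketch a minimal one by hand and drive it with canned events, no server required. To be clear, this is a raw sketch of the ASGI interface, not how you would write a Channels consumer:

```python
import asyncio

async def application(scope, receive, send):
    """A minimal raw ASGI application that echoes WebSocket messages."""
    assert scope["type"] == "websocket"
    while True:
        event = await receive()
        if event["type"] == "websocket.connect":
            await send({"type": "websocket.accept"})
        elif event["type"] == "websocket.receive":
            await send({"type": "websocket.send", "text": event["text"]})
        elif event["type"] == "websocket.disconnect":
            break

# Drive it in-memory with canned events (no server needed)
async def demo():
    events = iter([
        {"type": "websocket.connect"},
        {"type": "websocket.receive", "text": "ping"},
        {"type": "websocket.disconnect"},
    ])
    sent = []
    async def receive():
        return next(events)
    async def send(message):
        sent.append(message)
    await application({"type": "websocket"}, receive, send)
    return sent

loop = asyncio.get_event_loop()
sent = loop.run_until_complete(demo())
print(sent)
```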

    So far, we have not used the channel layer. Ironically, you can write Channels applications without using channels! However, they are not particularly useful, as there is no easy communication path between application instances other than polling a database. The channel layer provides exactly that: fast point-to-point and broadcast messaging between application instances.

    A channel is like a pipe. A sender sends a message into this pipe from one end, and it reaches a listener at the other end. A group defines a set of channels that are all listening to a topic. Every consumer listens to its own auto-generated channel, accessed via its self.channel_name attribute.
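    The pipe idea can be illustrated with nothing more than an asyncio.Queue. To be clear, this is a toy model of the concept, not the channel layer API:

```python
import asyncio

# A toy "channel": a sender puts a message in one end,
# a listener takes it out the other end
channel = asyncio.Queue()

async def sender():
    await channel.put("hello, listener!")

async def listener():
    message = await channel.get()
    return message

loop = asyncio.get_event_loop()
_, received = loop.run_until_complete(asyncio.gather(sender(), listener()))
print(received)
```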

    In addition to transports, you can trigger a consumer listening to a channel by sending a message, thereby starting a background task. This works as a very quick and simple background worker system.

    Building a Channels Application Step-by-step

    The following screencast covers the creation of a notification application using Django Channels. You can access the code on GitHub. The intermediate projects, like the Echo Consumer, can be accessed as branches of the git repository.

    The show notes can also be accessed on GitHub.

    Check out the video tutorial and let me know if you found it useful! To know more, read the Django Channels documentation.

    This article contains an excerpt from "Django Design Patterns and Best Practices" by Arun Ravindran


    Get started with Async & Await

    You are reading a post from a two-part tutorial series on Django Channels


    Asyncio is a co-operative multitasking library that has been part of Python since version 3.4 (the native async/await syntax arrived in 3.5). Celery is fantastic for running concurrent tasks out of process, but there are times when you need to run multiple tasks in a single thread inside a single process.

    If you are not familiar with async/await concepts (say, from JavaScript or C#), then it involves a somewhat steep learning curve. However, it is well worth your time, as it can speed up your code tremendously (unless it is completely CPU-bound). Moreover, it helps in understanding libraries built on top of it, like Django Channels.

    This post is an attempt to explain the concepts in a simplified manner rather than try to be comprehensive. I want you to start using asynchronous programming and enjoy it. You can learn the nitty gritties later.

    All asyncio programs are driven by an event loop, which is pretty much an indefinite loop that calls all registered coroutines in some order until they all terminate. Each coroutine operates cooperatively by yielding control to fellow coroutines at well-defined places. This is called awaiting.

    A coroutine is like a special function which can suspend and resume execution. They work like lightweight threads. Native coroutines use the async and await keywords, as follows:

    import asyncio

    async def sleeper_coroutine():
        await asyncio.sleep(5)

    if __name__ == '__main__':
        loop = asyncio.get_event_loop()
        loop.run_until_complete(sleeper_coroutine())

    This is a minimal example of an event loop running one coroutine named sleeper_coroutine. When invoked, this coroutine runs until the await statement and yields control back to the event loop. This is usually where an input/output activity occurs.

    Control comes back to the coroutine at the same line when the activity being awaited is completed (after five seconds). Then the coroutine returns and is considered completed.

    Explain async and await

    [TLDR; Watch my screencast to understand this section with a lot more code examples.]

    Initially, I was confused by the presence of the new keywords in Python: async and await. Asynchronous code seemed to be littered with these keywords yet it was not clear what they did or when to use them.

    Let’s first look at the async keyword. Commonly used before a function definition as async def, it indicates that you are defining a (native) coroutine.

    You should know two things about coroutines:

    1. Don’t perform slow or blocking operations synchronously inside coroutines.
    2. Don’t call a coroutine directly like a regular function call. Either schedule it in an event loop or await it from another coroutine.

    Unlike a normal function call, invoking a coroutine does not execute its body right away. Instead, it returns a coroutine object. Invoking the send method of this object starts the execution of the coroutine body.

    >>> async def hi():
    ...     print("HOWDY!")
    ...
    >>> o = hi()
    >>> o
    <coroutine object hi at 0x000001DAE26E2F68>
    >>> o.send(None)
    HOWDY!
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    StopIteration
    The body does run, but when the coroutine returns, it ends in a StopIteration exception. Hence, it is better to use the event loop provided by asyncio to run coroutines. The loop handles exceptions, in addition to all the other machinery for running coroutines concurrently.

    >>> import asyncio
    >>> loop = asyncio.get_event_loop()
    >>> o = hi()
    >>> loop.run_until_complete(o)
    HOWDY!

    Next, we have the await keyword, which must only be used inside a coroutine. When you call another coroutine, chances are that it might get blocked at some point, say while waiting for I/O.

    >>> async def sleepy():
    ...     await asyncio.sleep(3)
    >>> o = sleepy()
    >>> loop.run_until_complete(o)
    # After three seconds

    The sleep coroutine from the asyncio module is different from its synchronous counterpart, time.sleep. It is non-blocking, which means that other coroutines can execute while this coroutine is awaiting the sleep to complete.
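    Here is a small sketch to show this in action: two coroutines sleeping concurrently take about as long as one of them alone, not the sum of both:

```python
import asyncio
from time import time

async def nap(seconds):
    await asyncio.sleep(seconds)
    return seconds

async def main():
    # Both naps run concurrently, so this takes ~0.2s, not 0.4s
    return await asyncio.gather(nap(0.2), nap(0.2))

loop = asyncio.get_event_loop()
start = time()
results = loop.run_until_complete(main())
elapsed = time() - start
print("Slept for {} in {:.2f}s".format(results, elapsed))
```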

    When a coroutine uses the await keyword to call another coroutine, it acts like a bookmark. When a blocking operation happens, it suspends the coroutine (and all the coroutines that are awaiting it) and returns control to the event loop. Later, when the event loop is notified of the completion of the blocking operation, execution resumes from the paused await expression and continues onward.

    Asyncio vs Threads

    If you have worked on multi-threaded code, then you might wonder – Why not just use threads? There are several reasons why threads are not popular in Python.

    Firstly, threads need to be synchronized while accessing shared resources, or we will have race conditions. There are several types of synchronization primitives, like locks, but essentially they involve waiting, which degrades performance and can cause deadlocks or starvation.

    A thread may be interrupted at any time, but coroutines hand over execution only at well-defined places; this is co-operative multitasking. As a result, you may make changes to shared state as long as you leave it in a known state. For instance, you can retrieve a field from a database, perform calculations and overwrite the field without worrying that another coroutine might have interrupted you in between. All this is possible without locks.
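    A quick sketch to demonstrate: a hundred coroutines hammering a shared counter, with no locks, still produce the right answer, because each increment runs atomically between awaits:

```python
import asyncio

counter = 0

async def increment(times):
    global counter
    for _ in range(times):
        # Read-modify-write with no await in between: no other coroutine
        # can interrupt this statement, so no lock is needed
        counter += 1
        await asyncio.sleep(0)  # hand control back to the event loop

loop = asyncio.get_event_loop()
loop.run_until_complete(asyncio.gather(*(increment(100) for _ in range(100))))
print(counter)
```

With threads, the same unlocked pattern could lose updates; here the count is always exact.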

    Secondly, coroutines are lightweight. Each coroutine needs an order of magnitude less memory than a thread. If you can run a maximum of hundreds of threads, then you might be able to run tens of thousands of coroutines in the same memory. Thread switching also takes some time (a few milliseconds). This means you might be able to run more tasks or serve more concurrent users (just like how Node.js works on a single thread without blocking).
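    To see how cheap coroutines are, try spawning ten thousand of them; an equivalent number of threads would exhaust memory on many systems:

```python
import asyncio

async def tiny(i):
    # A coroutine's overhead is tiny compared to a thread's stack
    await asyncio.sleep(0)
    return i

loop = asyncio.get_event_loop()
results = loop.run_until_complete(
    asyncio.gather(*(tiny(i) for i in range(10000))))
print(len(results))
```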

    The downside of coroutines is that you cannot mix blocking and non-blocking code. Once you enter the event loop, the rest of the code driven by it must be written in an asynchronous style, including the standard or third-party libraries you use. This might make using some older libraries with synchronous code somewhat difficult.

    If you really want to call asynchronous code from synchronous or vice versa, then do read this excellent overview of various cases and adaptors you can use by Andrew Godwin.
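    For the common case of calling blocking synchronous code from a coroutine, the standard library itself offers run_in_executor, which pushes the call into a thread pool so the event loop stays free (this is just one of the adaptors that the overview covers):

```python
import asyncio
import time

def blocking_call():
    # A synchronous function we cannot rewrite
    time.sleep(0.1)
    return "finished"

async def main():
    loop = asyncio.get_event_loop()
    # Run the blocking function in a thread pool, keeping the loop free
    return await loop.run_in_executor(None, blocking_call)

loop = asyncio.get_event_loop()
result = loop.run_until_complete(main())
print(result)
```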

    The Classic Web-scraper Example

    Let’s look at an example of how we can rewrite synchronous code as asynchronous. We will look at a web scraper which downloads pages from a few URLs and measures each page’s size. This is a common example because it is very I/O-bound, which shows a significant speedup when handled concurrently.

    Synchronous web scraping

    The synchronous scraper uses Python 3 standard libraries like urllib. It downloads the home page of three popular sites and the fourth is a large file to simulate a slow connection. It prints the respective page sizes and the total running time.

    Here is the code for the synchronous scraper:

    # sync.py
    """Synchronously download a list of webpages and time it"""
    from urllib.request import Request, urlopen
    from time import time

    # NOTE: the original list of URLs was truncated in the source;
    # these are representative stand-ins. The last, slow URL simulates
    # a large download.
    sites = [
        "https://www.python.org/",
        "https://www.djangoproject.com/",
        "https://www.wikipedia.org/",
        "https://httpbin.org/delay/3",
    ]

    def find_size(url):
        req = Request(url)
        with urlopen(req) as response:
            page = response.read()
            return len(page)

    def main():
        for site in sites:
            size = find_size(site)
            print("Read {:8d} chars from {}".format(size, site))

    if __name__ == '__main__':
        start_time = time()
        main()
        print("Ran in {:6.3f} secs".format(time() - start_time))

    On a test laptop, this code took 5.4 seconds to run. That is the cumulative loading time of all the sites. Let’s see how the asynchronous code runs.

    Asynchronous web scraping

    This asyncio code requires installation of a few Python asynchronous network libraries such as aiohttp and aiodns. They are mentioned in the docstring.

    Here is the code for the asynchronous scraper – it is structured to be as close as possible to the synchronous version so it is easier to compare:

    # async.py
    """Asynchronously download a list of webpages and time it

    Dependencies: Make sure you install aiohttp and aiodns using:
        pip install aiohttp aiodns
    """
    import asyncio
    import aiohttp
    from time import time

    # Configure logging to show timestamps
    import logging
    logging.basicConfig(format='%(asctime)s %(message)s',
                        datefmt='[%H:%M:%S]', level=logging.INFO)
    log = logging.getLogger()

    # NOTE: the original list of URLs was truncated in the source;
    # these are the same stand-ins as in sync.py for comparison
    sites = [
        "https://www.python.org/",
        "https://www.djangoproject.com/",
        "https://www.wikipedia.org/",
        "https://httpbin.org/delay/3",
    ]

    async def find_size(session, url):
        log.info("START {}".format(url))
        async with session.get(url) as response:
            log.info("RESPONSE {}".format(url))
            page = await response.read()
            log.info("PAGE {}".format(url))
            return url, len(page)

    async def main():
        tasks = []
        async with aiohttp.ClientSession() as session:
            for site in sites:
                tasks.append(find_size(session, site))
            results = await asyncio.gather(*tasks)
        for site, size in results:
            print("Read {:8d} chars from {}".format(size, site))

    if __name__ == '__main__':
        start_time = time()
        loop = asyncio.get_event_loop()
        loop.run_until_complete(main())
        print("Ran in {:6.3f} secs".format(time() - start_time))

    The main function is a coroutine which triggers the creation of a separate coroutine for each website. Then it awaits until all these triggered coroutines are completed. As a best practice, the web session object is passed to avoid re-creating new sessions for each page.

    The total running time of this program on the same test laptop is 1.5 s. This is a speedup of 3.6x on the same single core. This surprising result can be better understood if we can visualize how the time was spent, as shown below:

    Comparing scrapers

    A simplistic representation comparing tasks in the synchronous and asynchronous scrapers

    The synchronous scraper is easy to understand. Scraping activity needs very little CPU time and the majority of the time is spent waiting for the data to arrive from the network. Each task is waiting for the previous task to complete. As a result the tasks cascade sequentially like a waterfall.

    On the other hand the asynchronous scraper starts the first task and as soon as it starts waiting for I/O, it switches to the next task. The CPU is hardly idle as the execution goes back to the event loop as soon as the waiting starts. Eventually the I/O completes in the same amount of time but due to the multiplexing of activity, the overall time taken is drastically reduced.

    In fact, the asynchronous code can be sped up further. The standard asyncio event loop is written in pure Python and provided as a reference implementation. You can consider faster implementations like uvloop for a further speedup (my running time came down to 1.3 secs).

    Concurrency is not Parallelism

    Concurrency is the ability to perform other tasks while you are waiting on the current task. Imagine you are cooking a lot of dishes for some guests. While waiting for something to cook, you are free to do other things like peeling onions or cutting vegetables. Even when one person cooks, typically there will be several things happening concurrently.

    Parallelism is when two or more execution engines are performing a task. Continuing on our analogy, this is when two or more cooks work on the same dish to (hopefully) save time.

    It is very easy to confuse concurrency and parallelism because they can happen at the same time. You could be concurrently running tasks without parallelism, or vice versa. But they refer to two different things. Concurrency is a way of structuring your programs, while parallelism refers to how they are executed.

    Due to the Global Interpreter Lock, we cannot run more than one thread of the Python interpreter (to be specific, the standard CPython interpreter) at a time even in multicore systems. This limits the amount of parallelism which we can achieve with a single instance of the Python process.

    Optimal usage of your computing resources requires both concurrency and parallelism. Concurrency helps you avoid idling a processor core while waiting for, say, I/O events, while parallelism helps distribute work among all the available cores.

    In both cases, you are not executing synchronously i.e. waiting for a task to finish before moving on to another task. Asynchronous systems might seem to be the most optimal. However, they are harder to build and reason about.

    Why another Asynchronous Framework?

    Asyncio is by no means the first cooperative multitasking or light-weight thread library. If you have used gevent or eventlet, you might find asyncio needs more explicit separation between synchronous and asynchronous code. This is usually a good thing.

    Gevent relies on monkey-patching to change blocking I/O calls to non-blocking ones. This can lead to hard-to-find performance issues, since a single unpatched blocking call can slow down the whole event loop. As the Zen of Python says, ‘Explicit is better than implicit’.

    Another objective of asyncio was to provide a standardized concurrency framework for all implementations like gevent or Twisted. This not only reduces duplicated efforts by library authors but also ensures that code is portable for end users.

    Personally, I think the asyncio module can be more streamlined. There are a lot of ideas which somewhat expose implementation details (e.g. native coroutines vs generator-based coroutines). But it is useful as a standard to write future-proof code.

    Can we use asyncio in Django?

    Strictly speaking, the answer is no. Django is a synchronous web framework. You might be able to run a separate worker process, say in Celery, that runs an embedded event loop. This can be used for I/O-bound background tasks like web scraping.

    However, Django Channels changes all that. Django might fit in the asynchronous world after all. But that’s the subject of another post.

    This article contains an excerpt from "Django Design Patterns and Best Practices" by Arun Ravindran


    Interview with Daniel Roy Greenfeld (PyDanny)

    Daniel Roy Greenfeld needs no introduction to Djangonauts. He is the co-author of Two Scoops of Django, a book which is probably on the shelves of every serious Django practitioner. But PyDanny, as he is fondly known, is also a wonderful fiction author, a fitness enthusiast and a lot more.

    Having known Daniel for a while as a wonderful friend and a great inspiration, I am so excited that he agreed to my interview. Let’s get started…

    PyDanny Photo

    How did the idea of writing an ice-cream themed book occur?

    The first 50 pages that I wrote were angry and were for a book with an angry name. You see, I was tired of having to pick up after sloppy or unwise coding practices on rescue projects. I was furious and wanted to fix the world.

    However, I was getting stuck in what I wanted to say, or didn’t know things. I kept asking my normal favorite resource for help, Audrey Roy Greenfeld. Eventually she started to write (or rewrite) whole sections and I realized that I wasn’t writing the book alone.

    Therefore I asked Audrey to be my co-author. She’s a cheerful person and said that if she were to accept, the book had to be lightened. That meant changing the name. After a lot of different titles were discussed over many ice cream sessions, we decided to use the subject matter at hand, which worked out well, as the ice cream theme made for a good example subject.

    How do you and Audrey collaborate while writing a book?

    We take turns writing original material that interests us. The other person follows them and acts as editor and proofreader. We go back and forth a few hundred times and there you go.

    For tech writing we use Git as version control and LaTeX for formatting. For fiction we use Google docs followed by some Python scripts that merge and format the files.

    What’s the most exciting recent development in Django? Where do you think it can improve?

    I like the new URL system as it’s really nice for beginners and advanced coders alike. While I like writing regular expressions, I’m the exception in this case.

    Where I think Django can improve is having more non-US/Europe/Australian representation within the DSF and in the core membership. In short, most of Django core looks like me, and I think that’s wrong. Many of the best Django (and Python) people look nothing like me, and they deserve more recognition. While having Anna Makarudze on the DSF board is a wonderful development, as a community we can still do better in core.

    In the case of Django’s core team, I believe this has happened because all the major Django conferences are in the US, Europe, and Australia, and from what I’ve seen over the years, it’s through participation in those events that most people get onto the Django core team. The DSF is aware of the problem, but I think more people should raise it as an issue. More people vocalizing this as a problem will get it resolved more quickly.

    With the Ambria fantasy series, you have proven to be a prolific fiction author too. It reminds me of Lewis Carroll, who wrote both children’s books and mathematical treatises. What is the difference in the writing process between fiction and non-fiction?

    For us, the process is very similar. We both write and we both review our stuff. The difference is that if we make a mistake in our fiction, it’s not as critical. That means that the review process for fiction is a lot easier on us than it is for technical books or articles. I can’t begin to tell you what a load that is off my shoulders.

    Why fantasy? Any literary influences?

    We like fantasy because we can just let our imaginations run away with us. For the Ambria series, our influences include Tolkien, Joseph Campbell, Glen Cook, Greek mythology, and various equine and religious studies.

    Do you have a daily writing routine?

    Like coding on a fun project, when we get to write, we get up early and just start working. When we get hungry or thirsty we stop. The day seems to fly by and we are very happy. We try not to mix writing days with coding days, as we like to focus on one thing at a time. Neither of us are big believers in multi-tasking, so sticking to one thing is important to us.

    What’s your favorite part of the writing process?

    Getting to write with my favorite person in the whole world, Audrey Roy Greenfeld. :-)

    Also, having people read our stuff and comment on it, both positively and negatively.

    Do you ever get writer’s block?

    Not usually. Our delays are almost always because of other things getting in the way. We’re very fortunate that way!

    When I do get writer’s block, I try to do something active. Be it exercise or fixing something in the house that needs it.

    Considering you can do cartwheels, I am assuming you are pretty fit. Do you think technology folks don’t give it enough importance?

    I’m older than I look, but even while dealing with an unpleasant knee injury, I move faster and better than 90% of software developers. And when I look at other coders my age, I see people old before their years. I believe youth is fleeting unless you take a little bit of time every day to keep your strength and flexibility.

    Anything else you would like to say?

    To paraphrase Jurassic Park, “Just because you can do a thing doesn’t mean you should do a thing.”

    As software developers, we have skills that let us do amazing things. With enough time and experience, we can do pretty much anything we are asked to do. That said, we should consider whether or not we should always do what we are asked to do.

    For example, the combined power of image recognition, big data, and distributed systems is really fun to play with, but we need to be aware that these tools can be dangerous. In the past year we’ve seen it used to affect opinion and elections, and this is only the beginning. It’s our responsibility to the future to be aware that the tools we are playing with have a lot of power, and that the people who are paying us to use them might not have the best intentions.

    Hence why I like to say, “Just because you can do a thing doesn’t mean you should do a thing.”

    Do check out “Two Scoops of Django 1.11: Best Practices for the Django Web Framework” by Two Scoops Press


    Django Release Schedule and Python 3

    Do long-term releases confuse you? For the longest time, I was not sure which version of Ubuntu to download: the latest release or the LTS? I see a number of Django developers similarly confused about Django’s releases. So I prepared this handy guide to help you choose (or confuse?).

    Which Version To Use?

    Django has now standardized on a release schedule with three kinds of releases:

    Feature Release: These releases have new features or improvements to existing features. They happen every eight months and have 16 months of extended support from release. They have version numbers like A.B (note there’s no patch version).

    Long-Term Support (LTS) Release: These are a special kind of feature release, with longer extended support of three years from the release date. They happen every two years. They have version numbers like A.2 (since every third feature release is an LTS). Consecutive LTS releases have a few months of overlapping support to aid in a smoother migration.

    Patch Release: These releases are bug fixes or security patches. It is recommended to deploy them as soon as possible. Since they have minimal breaking changes, these upgrades should be painless to apply. They have version numbers like A.B.C.
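    To make the numbering scheme concrete, here is a tiny helper of my own (not part of Django) that classifies versions under the post-2.0 scheme; note that 1.11 is an earlier LTS that predates this pattern:

```python
def is_lts(version):
    """Under the post-2.0 scheme, A.2 feature releases are LTS.

    Works for version strings like '2.2' or '3.2.1'. Django 1.11,
    an LTS from the older numbering, is a known exception.
    """
    parts = version.split(".")
    return len(parts) >= 2 and parts[1] == "2"

print(is_lts("2.2"), is_lts("2.1"), is_lts("3.2.1"))
```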

    The Django roadmap visualized below should make the release approach clearer:

    Django Releases (LTS and feature releases) explained

    The dates are indicative and may change. This is not an official diagram but something that I created for my understanding.

    The big takeaway is that Django 1.11 LTS will be the last release to support Python 2 and it is supported until April 2020. Subsequent versions will use only Python 3.

    The right Django version for you will be based on how frequently you can upgrade your Django installation and which features you need. If your project is actively developed and the Django version can be upgraded at least once every 16 months, then you should install the latest feature release, regardless of whether it is LTS or non-LTS.

    Otherwise, if your project is only occasionally developed then you should pick the most recent LTS version. Upgrading your project’s Django dependency from one feature release to another can be a non-trivial effort. So, read the release notes and plan accordingly.

    In any case, make sure you install Patch Releases as soon as they are released. Now, if you are still on Python 2 then read on.

    Python 3 has crossed the tipping point

    When I decided to use Python 3 only while writing my book “Django Design Patterns and Best Practices” in 2015, it was a time when Python 2 versus Python 3 was hotly debated. However, to me Python 3 seemed much cleaner, without arcane syntax like methods named __unicode__ or classes needing to derive from the object parent class.
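    For the curious, here is roughly what that cleanup looks like, in a minimal sketch:

```python
# Python 2 needed boilerplate like:
#
#     class Post(object):            # explicit object parent class
#         def __unicode__(self):     # separate method for unicode text
#             return self.title
#
# In Python 3, the same model is simply:
class Post:
    def __init__(self, title):
        self.title = title

    def __str__(self):  # strings are always unicode in Python 3
        return self.title

print(Post("Hello, Python 3"))
```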

    Now, it is quite a different picture. We just saw how Django no longer supports Python 2 except in the last LTS release. This is a big push for many Python shops to consider Python 3.

    Many platforms have upgraded their default Python interpreter. Since 1 March 2018, Python 3 has been announced as the default “python” in Homebrew installs. Arch Linux completely switched to Python 3 back in 2010.

    Fedora has used Python 3 as its system default since version 23. However, even though the system tools run on python3, the symlink /usr/bin/python will still point to python2 for backward compatibility. So it is probably a good idea to use the #!/usr/bin/env python idiom in your shell scripts.

    On 26 April 2018, when Ubuntu 18.04 LTS (Bionic Beaver) is released, it is planned to have Python 3.6 as the default. Further upstream, the next Debian release in testing, Debian 10 (Buster), is expected to transition to Python 3.6.

    Moving on to packages, the Python 3 Wall of Superpowers shows that, at the time of writing, 190 out of 200 popular Python packages support Python 3 – nearly all of them. The only notable package remaining is supervisor, which is about to turn green with supervisor 4.0 (unreleased).

    Common Python 3 Migration Blockers

    You might be aware of at least one project which is still on Python 2. It could be open source or an internal project, stuck on Python 2 for a number of reasons. I’ve come across a number of such projects, and here are my responses to the common reasons:

    Reason 1: My Project is too complex

    Some very large and complex projects like NumPy or Django have been migrated successfully. You can learn from the migration strategies of projects like Django, which maintained a common codebase for Python 2 and 3 using the six (2 × 3 = 6, get it?) library before switching to Python 3 only.

    Reason 2: I still have time

    It is closer than you think. The Python clock shows there is a little more than two years and two months left for Python 2 support.

    In fact, you have had a lot of time. It has been ten years since Python 3 was announced. That is a lot of overlap to transition from one version to another.

    In today’s ‘move fast and break things’ world, a lot of projects abruptly stop support and ask you to migrate as soon as a new release is out. By contrast, Python’s decade-long overlap is a far more realistic approach for enterprises, which need a lot more planning and testing.

    Reason 3: I have to learn Python 3

    But you already know most of it! You might need about 10 minutes to learn the differences. In fact, I have written a post to guide Django coders through Python 3. Small Django/Python 2 projects need only trivial changes to work on Python 3.

    You might see many old blog posts about Python 3 being buggy or slow. Well, that has not been true for a while. Not only is it extremely stable and bug-free, it is actually used in production by several companies. Performance-wise, it has been getting faster with every release, so it is faster than Python 2 in most cases and slower in only a few.

    Of course, there are a lot of awesome new features and libraries added in Python 3. You can learn them as and when you need them. I would recommend reading the release notes to understand them. I will mention my favourites soon.

    Reason 4: Nobody is asking

    Some people have the philosophy that if nobody is asking, then nobody cares. Well, they do care if the application they run is on an unsupported technology. It is better to plan for the eventual transition than to rush it later on a higher budget.

    Are you missing out?

    Image by https://pixabay.com/en/users/GlenisAymara-856260/

    To me, the biggest reason to switch was that all the newest and greatest features were coming to Python 3. My favourite top three exclusive features in Python 3 are:

    • asyncio: One of the coolest technologies I picked up recently. The learning process is sometimes mind-bending. But the performance boost in the right situations is incredible.

    • f-strings: They are so incredibly expressive that you would want to use them everywhere. Instant love!

    • dict: The new compact dict implementation uses less memory and is ordered!
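    A tiny taste of the last two (Python 3.6 or later):

```python
# f-strings: expressions are interpolated right inside the literal
framework, version = "Django", 2.0
message = f"{framework} {version} runs only on Python 3"
print(message)

# The compact dict implementation preserves insertion order
releases = {"2.0": "Dec 2017", "1.11": "Apr 2017"}
print(list(releases))
```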

    Yes, FOMO is real.

    Apart from my personal reasons, I would recommend everyone to migrate so that the community benefits from investing its efforts into a common codebase. Plus, we can all be consistent on which Python to recommend to beginners. Because:

    There should be one– and preferably only one –obvious way to do it. Although that way may not be obvious at first unless you’re Dutch.

    This article contains an excerpt from the upcoming second edition of book "Django Design Patterns and Best Practices" by Arun Ravindran

