Punchscript - A Rajinikanth Inspired Programming Language

    If you could design a new programming language, what would it be like? That’s a question I have had ever since I took the Programming Languages (PL) course in the third year of my Computer Science Engineering degree way back in 2001. For the first time, I have an answer – Punchscript.

    Punchscript is a programming language made up of punch dialogues by the Indian movie star Rajinikanth. Punch dialogues are more punchlines than dialogues, delivered in Rajini’s inimitable style in the form of an aphorism or a retort.

    Cool Rajini scene
    Rajinikanth in Baashha (1995)

    Here is what the customary Fizz Buzz looks like in Punchscript:

    Code Screenshot

    Punchscript works with only a signed integer datatype. So instead of boolean logic operators, you’ll need to use integer arithmetic. Also, it supports only IF… ELSE rather than IF… ELSEIF… compound statements. Notice how these limitations are worked around in the code above.
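    To see how far a single integer type and plain IF… ELSE can take you, here is the same Fizz Buzz workaround sketched in Python (not Punchscript syntax) under the same constraints:

```python
# Fizz Buzz under Punchscript-like constraints: one signed integer type,
# no elif, and branching decided by integer arithmetic alone.
def fizzbuzz(n):
    out = []
    for i in range(1, n + 1):
        if i % 15 == 0:              # zero only when divisible by 3 AND 5
            out.append("FizzBuzz")
        else:
            if i % 3 == 0:           # nested if/else stands in for elif
                out.append("Fizz")
            else:
                if i % 5 == 0:
                    out.append("Buzz")
                else:
                    out.append(str(i))
    return out

print("\n".join(fizzbuzz(15)))
```

    The `i % 15` trick replaces the boolean conjunction `i % 3 == 0 and i % 5 == 0` with a single integer test, which is exactly the kind of workaround an integers-only language forces on you.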

    You can try Punchscript in your browser and run various examples. Despite its limitations, you can write all kinds of non-trivial algorithms. It is Turing Complete, like most languages, so any computation can be performed in it. But I would not recommend betting your next startup on it.

    Chasing the Camel

    Remember my PL course of 2001? Among the half a dozen languages we read about, one stood out as both elegant and practical – ML (which stood for Meta Language long before some smart guy recently made it mean Machine Learning). We learnt Standard ML at that time, but even then OCaml was much more popular.

    Then, year after year, nearly every ICFP functional programming contest had OCaml mentioned by one of the top three winners (the trend has changed in recent years). The syntax seemed easy enough and I could pick it up in a few days. After a while, I would completely forget about it, get interested in ML again and have to re-learn the whole thing. I have tracked this sisyphean task of learning OCaml in a decade-old journal on Google Docs.

    Eventually, I realized I needed to do something significant in OCaml. Now, which programming problem would be of moderate size and ideally solved in OCaml? The obvious answer was a compiler (with a ray tracer being a distant second). The language is excellent at manipulating algebraic data types, which is perfect for abstract syntax trees. Even better, the js_of_ocaml tool helps you convert the implementation to Javascript, so that your compiler can run in the browser!

    Now comes the question of which language to implement.

    Why this Kolaveri?

    It is way more fun to design your own programming language than to implement an existing one. You start by thinking what new syntax you can come up with.

    I thought - syntactic elements should be short and memorable like… punchlines. A bit of a stretch, but it sounded like a fun project. Rajinikanth movies seemed to be a goldmine of punch dialogues, with most movies having at least one.

    It took a lot of binge-watching research to find the best phrase for a loop or conditional statement. I learnt Tamil by ear, so mine is not perfect. But I must say that inventing a new language from scratch is both demanding and rewarding.

    The hardest part is selecting the least amount of syntax that is still useful. It is really tempting to assemble all your favourite features from other languages, but then your project would never get shipped. And as we know, shipping is the most important feature.

    The actual implementation took me 4 to 6 weeks, working mostly on weekends (including combing through the user manuals of various OCaml tools). Along the way, there were many decisions to make, like:

    • Whitespace significance
    • Data structures
    • Compiled or Interpreted
    • Functions or procedures
    • Virtual Machine or direct AST execution

    Funnily enough, I often picked the third choice: YAGNI. If we can live without a feature, leave it out. This not only made the implementation easier but also the language more elegant.

    There are so many ways to set up an OCaml development environment. I’ve set up all the modern OCaml tools needed in a Docker container. Keeping everything in Docker makes it easier to manage and reproduce.

    Opening the Toolbox

    The best instructions for setting up a modern OCaml development environment can be found at the Real World OCaml book’s site. The first step is to install Opam – OCaml’s package manager plus isolated environment manager (sort of like Python’s pipenv).

    I started by creating a Dockerfile based on the opam:alpine_ocaml image. You will need to reproduce the installation commands in the Dockerfile. The only change I made was installing with opam depext instead of opam install, so that external dependencies are installed too.

    Next, we need to install OCaml’s compiler building tools - OCamlLex and Menhir. There is an excellent chapter on Parsing in the Real World OCaml book which is a good introduction to using these tools. If you are familiar with YACC and LEX from other programming languages like C, it should be like meeting an old friend.

    Then I went ahead and installed Emacs as well, because Emacs needs to call OCaml for its OCaml mode, tuareg, to work properly. I found it extremely convenient to have a self-contained development environment, although Docker purists might prefer it to be separate.

    Make sure you are using Dune (previously called JBuilder) for building the project. You will probably still need a Makefile. Most of Punchscript is organized in a library, which can be built into bytecode or Javascript targets.

    Read Issu’s recent post on OCaml Best Practices if you are interested in an overview of modern OCaml development tools.

    Learning the Mystical Arts

    Setting up everything right might take a while, but that’s only the beginning. There are tons of resources on writing a compiler or on learning OCaml, but hardly any about making a compiler in OCaml.

    Here are some books and articles which really helped me:

    • If you are a novice, then I would recommend reading OCaml from the Very Beginning.
    • If you are already familiar with OCaml (like I was), then I would recommend reading the OCaml User Manual, available in many formats for offline reading (I wish they had an EPUB too for Kindle readers).
    • If you want to learn or have forgotten compiler theory (what was LALR again?), then you can read Modern Compiler Implementation in ML, by Andrew W. Appel. Just Chapter 3 is enough to brush up on parsing.
    • You ought to read the Menhir manual. The examples are extremely helpful.
    • The TOSS tutorial is great at explaining the entire toolchain.

    You will need to read a lot of documentation to understand most OCaml tools. I know some people get daunted by long manuals, but they are quite approachable. You just have to read the introductory sections and you should be good to go.

    Wiring up a Live Demo

    The fun part of the project was building a live demo for the web. The js_of_ocaml tool converts OCaml bytecode into compact Javascript code. It was surprisingly small: the entire Punchscript interpreter is a single file weighing just 29K gzipped (132K minified)!

    The interpreter is invoked from the page using Web Workers. There is something magical about watching a web page interact with your OCaml program, synthesized from the almost sterile world of functional programming. Web Workers make the whole interaction asynchronous: no browser hangups while your code is running.

    Cool Rajini scene
    Rajinikanth in Enthiran / Robot (2010)

    I always wanted the examples to have proper Punchscript syntax highlighting. I used CodeMirror to build the code editor. Writing a custom syntax highlighter seemed to need a lot of Javascript code, so I borrowed heavily from the Python syntax highlighting mode.

    Taking it Further

    Punchscript is both a language and an implementation. Both have potential to grow. Language specs are public and new punch dialogues are welcome. You are also free to create your own implementation in your favourite language. I would be happy to link them.

    Have fun coding with punch dialogues!


    Thanks to Deepak and Ramakrishnan for reviewing the early drafts of the language specs.


    We are Good!

    TLDR; You can donate easily here

    This goes directly to Kerala Chief Minister's Distress Relief Fund. It's easy to use and completely tax exempt. It also welcomes foreign donations.

    Relieved to share that most of my family and friends have survived the terrible Kerala floods, which have been ravaging the state for days. Of course, everything is not okay. There is a long and painful road ahead to try to rebuild what we have lost. And there are some losses we might never recover from.

    Many friends, colleagues and relatives have been calling from all over the world. Their reactions range from “maybe global warming is real because this is happening everywhere” to “humanity is doomed”. Of course, I am ignoring silly reactions like “must be wrath of god” or “is it fake news”.

    Everything these days is seen as a sign of humanity’s impending DOOM. We have sort of turned hollow. Everything seems to be about money or politics. People are supposed to have no other motive or emotion. Forget a soul.

    I think we need to remember one really important thing – it is not always about the money. Any species, be it animal, bird or insect, will help another if they see one of their kind in trouble.

    If an old man collapsed on the road clutching his chest, people wouldn’t first ask which state/country/religion he belongs to before trying to help him. The day we do, then trust me, we won’t need global warming or even killer robots. Humanity is doomed, both metaphorically and literally.

    Thankfully there is hope. In times of this crisis, I’ve been in constant touch with my friends who are helping in any way possible. Those who can drive are picking up total strangers. Those who can code are rapidly building resilient applications to coordinate rescue. Those who can cook are preparing thousands of food packets and infant food. Even those who cannot afford four square meals a day are donating whatever they can.

    Even if there is even one human being somewhere thanking you for your act of kindness, doesn’t it mean something? Isn’t that what the ultimate purpose of our life is - to be useful to someone? To make a difference?

    There is an overwhelming humanity around us. And that’s why we will survive. Together.

    PS: Please donate wholeheartedly.


    Understanding Django Channels

    You are reading a post from a two-part tutorial series on Django Channels

    Django Channels

    Django Channels was originally created to solve the problem of handling asynchronous communication protocols, like WebSockets. More and more web applications were providing realtime capabilities like chat and push notifications. Various workarounds were created to make Django support such requirements, like running separate socket servers or proxy servers.

    Channels is an official Django project, not just for handling WebSockets and other forms of bi-directional communication but also for running background tasks asynchronously. As of this writing, Django Channels 2 is out, which is a complete rewrite based on Python 3’s async/await coroutines.

    This article covers the concepts of Django Channels and leads you to a video tutorial implementing a notifications application in Django.

    Here is a simplified block diagram of a typical Channels setup:

    Django Channels

    How a typical Django Channels infrastructure works

    A client such as a web browser sends both HTTP/HTTPS and WebSocket traffic to an ASGI server like Daphne. Like WSGI, the ASGI (Asynchronous Server Gateway Interface) specification is a common way for application servers and applications to interact with each other asynchronously.

    As in a typical Django application, HTTP traffic is handled synchronously, i.e. when the browser sends a request, it waits until the request is routed to Django and a response is sent back. However, things get a lot more interesting with WebSocket traffic, because it can be triggered from either direction.

    Once a WebSocket connection is established, a browser can send or receive messages. A sent message reaches the protocol type router, which determines the next routing handler based on the message’s transport protocol. Hence you can define one router for HTTP and another for WebSocket messages.

    These routers are very similar to Django’s URL mappers, but they map incoming messages to a consumer (rather than a view). A consumer is like an event handler that reacts to events. It can also send messages back to the browser, thereby containing the logic for fully bi-directional communication.
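    As a rough sketch (assuming the Channels 2 generic consumer API; the EchoConsumer name and the ws/echo/ URL are made up for illustration), a consumer and its routing might look like:

```python
# consumers.py -- a minimal echo consumer (hypothetical name)
from channels.generic.websocket import AsyncWebsocketConsumer


class EchoConsumer(AsyncWebsocketConsumer):
    async def connect(self):
        await self.accept()                    # accept the WebSocket handshake

    async def receive(self, text_data=None, bytes_data=None):
        await self.send(text_data=text_data)   # echo the message back


# routing.py -- protocol type router mapping WebSocket URLs to consumers
from channels.routing import ProtocolTypeRouter, URLRouter
from django.urls import path

application = ProtocolTypeRouter({
    # HTTP traffic falls back to the regular Django views by default
    "websocket": URLRouter([
        path("ws/echo/", EchoConsumer),
    ]),
})
```

    This is framework wiring rather than a standalone program: it needs a Django project with Channels installed to actually run.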

    A consumer is a class whose methods you may choose to write either as normal Python functions (synchronous) or as awaitables (asynchronous). Asynchronous code should not mix with synchronous code, so there are conversion functions to convert from async to sync and back. Remember that the Django parts are synchronous. A consumer is, in fact, a valid ASGI application.

    So far, we have not used the channel layer. Ironically, you can write Channels applications without using channels! But they are not particularly useful, as there is no easy communication path between application instances other than polling a database. The channel layer provides exactly that: fast point-to-point and broadcast messaging between application instances.

    A channel is like a pipe. A sender sends a message into this pipe from one end and it reaches a listener at the other end. A group is a named set of channels that are all listening to a topic. Every consumer listens on its own auto-generated channel, accessed by its self.channel_name attribute.

    In addition to transports, you can trigger a consumer listening on a channel by sending it a message, thereby starting a background task. This works as a very quick and simple background worker system.
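    For instance (assuming the Channels 2 channel layer API; the "notifications" group name and notify_all helper are hypothetical), a synchronous piece of Django code could broadcast to every listening consumer like this:

```python
# Somewhere synchronous, e.g. inside a Django view or signal handler
from asgiref.sync import async_to_sync
from channels.layers import get_channel_layer


def notify_all(message):
    channel_layer = get_channel_layer()
    # Delivered to every consumer in the (hypothetical) "notifications"
    # group; each consumer handles it in a method named notify_message,
    # i.e. the "type" key with the dot replaced by an underscore.
    async_to_sync(channel_layer.group_send)(
        "notifications",
        {"type": "notify.message", "text": message},
    )
```

    Note the async_to_sync wrapper: group_send is a coroutine, and this is one of those conversion functions that let synchronous Django code talk to the asynchronous channel layer. Like the consumer sketch, this fragment needs a configured Channels project to run.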

    Building a Channels Application Step-by-step

    The following screencast covers the creation of a notification application using Django Channels. You can access the code on Github. The intermediate projects like the Echo Consumer can be accessed as branches of the git repository.

    The show notes can be accessed on Github.

    Check out the video tutorial and let me know if you found it useful! To know more, read the Django Channels documentation.

    This article contains an excerpt from "Django Design Patterns and Best Practices" by Arun Ravindran


    Get started with Async & Await

    You are reading a post from a two-part tutorial series on Django Channels

    Asyncio

    Asyncio is a co-operative multitasking library available in Python since version 3.4 (with the async/await syntax arriving in 3.5). Celery is fantastic for running concurrent tasks out of process, but there are times when you need to run multiple tasks in a single thread inside a single process.

    If you are not familiar with async/await concepts (say, from JavaScript or C#), then it involves a bit of a steep learning curve. However, it is well worth your time as it can speed up your code tremendously (unless it is completely CPU-bound). Moreover, it helps in understanding other libraries built on top of it, like Django Channels.

    This post is an attempt to explain the concepts in a simplified manner rather than try to be comprehensive. I want you to start using asynchronous programming and enjoy it. You can learn the nitty gritties later.

    All asyncio programs are driven by an event loop, which is pretty much an indefinite loop that calls all registered coroutines in some order until they all terminate. Each coroutine operates cooperatively by yielding control to fellow coroutines at well-defined places. This is called awaiting.

    A coroutine is like a special function which can suspend and resume execution. They work like lightweight threads. Native coroutines use the async and await keywords, as follows:

    import asyncio
    
    
    async def sleeper_coroutine():
        await asyncio.sleep(5)
    
    
    if __name__ == '__main__':
        loop = asyncio.get_event_loop()
        loop.run_until_complete(sleeper_coroutine())
    

    This is a minimal example of an event loop running one coroutine named sleeper_coroutine. When invoked this coroutine runs until the await statement and yields control back to the event loop. This is usually where an Input/Output activity occurs.

    The control comes back to the coroutine at the same line when the activity being awaited is completed (after five seconds). Then the coroutine returns and is considered completed.

    Explain async and await

    [TLDR; Watch my screencast to understand this section with a lot more code examples.]

    Initially, I was confused by the presence of the new keywords in Python: async and await. Asynchronous code seemed to be littered with these keywords yet it was not clear what they did or when to use them.

    Let’s first look at the async keyword. Commonly used before a function definition as async def, it indicates that you are defining a (native) coroutine.

    You should know two things about coroutines:

    1. Don’t perform slow or blocking operations synchronously inside coroutines.
    2. Don’t call a coroutine directly like a regular function call. Either schedule it in an event loop or await it from another coroutine.

    Unlike a normal function call, if you invoke a coroutine, its body will not get executed right away. Instead, it will be suspended and a coroutine object is returned. Invoking the send method of this object will start the execution of the coroutine body.

    >>> async def hi():
    ...     print("HOWDY!")
    ...
    >>> o = hi()
    >>> o
    <coroutine object hi at 0x000001DAE26E2F68>
    >>> o.send(None)
    HOWDY!
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    StopIteration
    >>>
    

    However, when the coroutine returns, it will end in a StopIteration exception. Hence it is better to use the event loop provided by asyncio to run a coroutine. The loop will handle exceptions in addition to all the other machinery for running coroutines concurrently.

    >>> import asyncio
    >>> loop = asyncio.get_event_loop()
    >>> o = hi()
    >>> loop.run_until_complete(o)
    HOWDY!
    

    Next we have the await keyword, which must only be used inside a coroutine. If you call another coroutine, chances are that it might get blocked at some point, say while waiting for I/O.

    >>> async def sleepy():
    ...     await asyncio.sleep(3)
    ...
    >>> o = sleepy()
    >>> loop.run_until_complete(o)
    # After three seconds
    >>>
    

    The sleep coroutine from the asyncio module is different from its synchronous counterpart, time.sleep: it is non-blocking. This means that other coroutines can be executed while this coroutine is awaiting the sleep’s completion.

    When a coroutine uses the await keyword to call another coroutine, it acts like a bookmark. When a blocking operation happens, it suspends the coroutine (and all the coroutines await-ing it) and returns control to the event loop. Later, when the event loop is notified of the completion of the blocking operation, execution resumes from the paused await expression and continues onward.
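    A quick way to see this multiplexing in action is to await two sleeps concurrently with asyncio.gather; both finish in roughly the time of one (the worker name and 0.2-second delay are arbitrary):

```python
import asyncio
from time import time


async def worker(name, delay):
    await asyncio.sleep(delay)   # non-blocking: control returns to the loop
    return name


async def main():
    start = time()
    # Both sleeps overlap, so this takes roughly 0.2s rather than 0.4s
    results = await asyncio.gather(worker("a", 0.2), worker("b", 0.2))
    return results, time() - start


# new_event_loop avoids the get_event_loop deprecation in newer Pythons
loop = asyncio.new_event_loop()
results, elapsed = loop.run_until_complete(main())
print(results, round(elapsed, 1))
```

    gather returns the results in the order the coroutines were passed in, regardless of which one finished first.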

    Asyncio vs Threads

    If you have worked on multi-threaded code, then you might wonder – Why not just use threads? There are several reasons why threads are not popular in Python.

    Firstly, threads need to be synchronized while accessing shared resources, or we will have race conditions. There are several types of synchronization primitives, like locks, but essentially they involve waiting, which degrades performance and can cause deadlocks or starvation.

    A thread may be interrupted at any time. Coroutines, on the other hand, have well-defined places where execution is handed over, i.e. co-operative multitasking. As a result, you may make changes to shared state as long as you leave it in a known state. For instance, you can retrieve a field from a database, perform calculations and overwrite the field without worrying that another coroutine might have interrupted you in between. All this is possible without locks.

    Secondly, coroutines are lightweight. Each coroutine needs an order of magnitude less memory than a thread. If you can run a maximum of hundreds of threads, then you might be able to run tens of thousands of coroutines given the same memory. Thread switching also takes some time (a few milliseconds). This means you might be able to run more tasks or serve more concurrent users (just like how Node.js works on a single thread without blocking).
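    To get a feel for how cheap coroutines are, you can schedule tens of thousands of them at once; the equivalent number of OS threads would exhaust memory long before that (the tiny coroutine and the 10,000 count are arbitrary illustrations):

```python
import asyncio


async def tiny(i):
    await asyncio.sleep(0)       # yield to the event loop once
    return i * 2


async def main():
    # 10,000 concurrent coroutines -- far beyond practical thread counts
    return await asyncio.gather(*(tiny(i) for i in range(10_000)))


loop = asyncio.new_event_loop()
results = loop.run_until_complete(main())
print(len(results), results[-1])
```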

    The downside of coroutines is that you cannot mix blocking and non-blocking code. Once you enter the event loop, the rest of the code driven by it must be written in an asynchronous style, even the standard or third-party libraries you use. This might make using some older libraries with synchronous code somewhat difficult.

    If you really want to call asynchronous code from synchronous or vice versa, then do read this excellent overview of various cases and adaptors you can use by Andrew Godwin.

    The Classic Web-scraper Example

    Let’s look at an example of how we can rewrite synchronous code as asynchronous. We will look at a web scraper which downloads pages from a few URLs and measures each page’s size. This is a common example because it is very I/O bound, which shows a significant speedup when handled concurrently.

    Synchronous web scraping

    The synchronous scraper uses Python 3 standard libraries like urllib. It downloads the home pages of three popular sites and prints the respective page sizes and the total running time.

    Here is the code for the synchronous scraper:

    # sync.py
    """Synchronously download a list of webpages and time it"""
    from urllib.request import Request, urlopen
    from time import time
    
    sites = [
        "https://news.ycombinator.com/",
        "https://www.yahoo.com/",
        "https://github.com/",
    ]
    
    
    def find_size(url):
        req = Request(url)
        with urlopen(req) as response:
            page = response.read()
            return len(page)
    
    
    def main():
        for site in sites:
            size = find_size(site)
            print("Read {:8d} chars from {}".format(size, site))
    
    
    if __name__ == '__main__':
        start_time = time()
        main()
        print("Ran in {:6.3f} secs".format(time() - start_time))
    

    On a test laptop, this code took 5.4 seconds to run. It is the cumulative loading time of each site. Let’s see how asynchronous code runs.

    Asynchronous web scraping

    This asyncio code requires installation of a few Python asynchronous network libraries such as aiohttp and aiodns. They are mentioned in the docstring.

    Here is the code for the asynchronous scraper – it is structured to be as close as possible to the synchronous version so it is easier to compare:

    # async.py
    """
    Asynchronously download a list of webpages and time it
    
    Dependencies: Make sure you install aiohttp using: pip install aiohttp aiodns
    """
    import asyncio
    import aiohttp
    from time import time
    
    # Configuring logging to show timestamps
    import logging
    logging.basicConfig(format='%(asctime)s %(message)s', datefmt='[%H:%M:%S]')
    log = logging.getLogger()
    log.setLevel(logging.INFO)
    
    sites = [
        "https://news.ycombinator.com/",
        "https://www.yahoo.com/",
        "https://github.com/",
    ]
    
    
    async def find_size(session, url):
        log.info("START {}".format(url))
        async with session.get(url) as response:
            log.info("RESPONSE {}".format(url))
            page = await response.read()
            log.info("PAGE {}".format(url))
            return url, len(page)
    
    
    async def main():
        tasks = []
        async with aiohttp.ClientSession() as session:
            for site in sites:
                tasks.append(find_size(session, site))
            results = await asyncio.gather(*tasks)
        for site, size in results:
            print("Read {:8d} chars from {}".format(size, site))
    
    
    if __name__ == '__main__':
        start_time = time()
        loop = asyncio.get_event_loop()
        loop.set_debug(True)
        loop.run_until_complete(main())
        print("Ran in {:6.3f} secs".format(time() - start_time))
    

    The main function is a coroutine which triggers the creation of a separate coroutine for each website. Then it awaits until all these triggered coroutines are completed. As a best practice, the web session object is passed to avoid re-creating new sessions for each page.

    The total running time of this program on the same test laptop is 1.5 seconds. This is a speedup of 3.6x on the same single core. This surprising result can be better understood if we visualize how the time was spent, as shown below:

    Comparing scrapers

    A simplistic representation comparing tasks in the synchronous and asynchronous scrapers

    The synchronous scraper is easy to understand. Scraping activity needs very little CPU time and the majority of the time is spent waiting for the data to arrive from the network. Each task is waiting for the previous task to complete. As a result the tasks cascade sequentially like a waterfall.

    On the other hand the asynchronous scraper starts the first task and as soon as it starts waiting for I/O, it switches to the next task. The CPU is hardly idle as the execution goes back to the event loop as soon as the waiting starts. Eventually the I/O completes in the same amount of time but due to the multiplexing of activity, the overall time taken is drastically reduced.

    In fact, the asynchronous code can be sped up further. The standard asyncio event loop is written in pure Python and provided as a reference implementation. You can consider faster implementations like uvloop for a further speedup (my running time came down to 1.3 secs).

    Concurrency is not Parallelism

    Concurrency is the ability to perform other tasks while you are waiting on the current task. Imagine you are cooking a lot of dishes for some guests. While waiting for something to cook, you are free to do other things like peeling onions or cutting vegetables. Even when one person cooks, typically there will be several things happening concurrently.

    Parallelism is when two or more execution engines are performing a task. Continuing on our analogy, this is when two or more cooks work on the same dish to (hopefully) save time.

    It is very easy to confuse concurrency and parallelism because they can happen at the same time. You could be concurrently running tasks without parallelism or vice versa. But they refer to two different things. Concurrency is a way of structuring your programs, while parallelism refers to how they are executed.

    Due to the Global Interpreter Lock, we cannot run more than one thread of the Python interpreter (to be specific, the standard CPython interpreter) at a time even in multicore systems. This limits the amount of parallelism which we can achieve with a single instance of the Python process.

    Optimal usage of your computing resources requires both concurrency and parallelism. Concurrency helps you avoid idling a processor core while waiting for, say, I/O events, while parallelism helps distribute work among all the available cores.

    In both cases, you are not executing synchronously i.e. waiting for a task to finish before moving on to another task. Asynchronous systems might seem to be the most optimal. However, they are harder to build and reason about.

    Why another Asynchronous Framework?

    Asyncio is by no means the first cooperative multitasking or light-weight thread library. If you have used gevent or eventlet, you might find asyncio needs more explicit separation between synchronous and asynchronous code. This is usually a good thing.

    Gevent relies on monkey-patching to change blocking I/O calls into non-blocking ones. This can lead to hard-to-find performance issues, due to an unpatched blocking call slowing the event loop. As the Zen of Python says, ‘Explicit is better than implicit’.

    Another objective of asyncio was to provide a standardized concurrency framework for all implementations like gevent or Twisted. This not only reduces duplicated efforts by library authors but also ensures that code is portable for end users.

    Personally, I think the asyncio module can be more streamlined. There are a lot of ideas which somewhat expose implementation details (e.g. native coroutines vs generator-based coroutines). But it is useful as a standard to write future-proof code.

    Can we use asyncio in Django?

    Strictly speaking, the answer is no. Django is a synchronous web framework. You might be able to run a separate worker process, say in Celery, that runs an embedded event loop. This can be used for I/O-bound background tasks like web scraping.
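    The idea can be sketched with plain asyncio (the Celery wiring is omitted; run_in_worker and scrape are hypothetical names): the synchronous task body spins up its own event loop, runs the coroutine to completion, and returns the result:

```python
import asyncio


def run_in_worker(coro):
    """Run a coroutine to completion from synchronous code
    (e.g. inside a Celery task body)."""
    loop = asyncio.new_event_loop()
    try:
        return loop.run_until_complete(coro)
    finally:
        loop.close()


async def scrape():
    await asyncio.sleep(0.01)    # stands in for real network I/O
    return "scraped"


result = run_in_worker(scrape())
print(result)
```

    The synchronous caller blocks until the event loop finishes, so from Django’s point of view nothing asynchronous ever leaks out of the worker.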

    However, Django Channels changes all that. Django might fit in the asynchronous world after all. But that’s the subject of another post.

    This article contains an excerpt from "Django Design Patterns and Best Practices" by Arun Ravindran


    Interview with Daniel Roy Greenfeld (PyDanny)

    Daniel Roy Greenfeld needs no introduction to Djangonauts. He is the co-author of the book Two Scoops of Django, which is probably on the shelf of every serious Django practitioner. But PyDanny, as he is fondly known, is also a wonderful fiction author, fitness enthusiast and a lot more.

    Having known Daniel for a while as a wonderful friend and a great inspiration, I am so excited that he agreed to my interview. Let’s get started…

    PyDanny Photo

    How did the idea of writing an ice-cream themed book occur?

    The first 50 pages that I wrote were angry and were for a book with an angry name. You see, I was tired of having to pick up after sloppy or unwise coding practices on rescue projects. I was furious and wanted to fix the world.

    However, I was getting stuck in what I wanted to say, or didn’t know things. I kept asking my normal favorite resource for help, Audrey Roy Greenfeld. Eventually she started to write (or rewrite) whole sections and I realized that I wasn’t writing the book alone.

    Therefore I asked Audrey to be my co-author. She’s a cheerful person and said that if she were to accept, the book had to be lightened. That meant changing the name. After a lot of different title names discussed over many ice cream sessions, we decided to use the subject matter at hand. Which worked out well as the ice cream theme made for a good example subject.

    How do you and Audrey collaborate while writing a book?

    We take turns writing original material that interests us. The other person follows them and acts as editor and proofreader. We go back and forth a few hundred times and there you go.

    For tech writing we use Git as version control and LaTeX for formatting. For fiction we use Google docs followed by some Python scripts that merge and format the files.

    What’s the most exciting recent development in Django? Where do you think it can improve?

    I like the new URL system as it’s really nice for beginners and advanced coders alike. While I like writing regular expressions, I’m the exception in this case.

    Where I think Django can improve is in having more non-US/Europe/Australian representation within the DSF and in the core membership. In short, most of Django core looks like me, and I think that’s wrong. Many of the best Django (and Python) people look nothing like me, and they deserve more recognition. While having Anna Makarudze on the DSF board is a wonderful development, as a community we can still do better in core.

    In the case of Django’s core team, I believe this has happened because all the major Django conferences are in the US, Europe, and Australia, and from what I’ve seen over the years, it’s through participation in those events that most people get onto the Django core team. The DSF is aware of the problem, but I think more people should raise it as an issue. More people vocalizing this as a problem will get it resolved more quickly.

    With the Ambria fantasy series, you have proven to be a prolific fiction author too. Reminds me of Lewis Carroll, who wrote children’s books and mathematical treatises. What is the difference in the writing process between fiction and non-fiction?

    For us, the process is very similar. We both write and we both review our stuff. The difference is that if we make a mistake in our fiction, it’s not as critical. That means that the review process for fiction is a lot easier on us than it is for technical books or articles. I can’t begin to tell you what a load that is off my shoulders.

    Why fantasy? Any literary influences?

    We like fantasy because we can just let our imaginations run away with us. For the Ambria series, our influences include Tolkien, Joseph Campbell, Glen Cook, Greek mythology, and various equine and religious studies.

    Do you have a daily writing routine?

    Like coding on a fun project, when we get to write, we get up early and just start working. When we get hungry or thirsty we stop. The day seems to fly by and we are very happy. We try not to mix writing days with coding days, as we like to focus on one thing at a time. Neither of us are big believers in multi-tasking, so sticking to one thing is important to us.

    What’s your favorite part of the writing process?

    Getting to write with my favorite person in the whole world, Audrey Roy Greenfeld. :-)

    Also, having people read our stuff and comment on it, both positively and negatively.

    Do you ever get writer’s block?

    Not usually. Our delays are almost always because of other things getting in the way. We’re very fortunate that way!

    When I do get writer’s block, I try to do something active, be it exercise or fixing something in the house that needs it.

    Considering you can do cartwheels, I am assuming you are pretty fit. Do you think technology folks don’t give it enough importance?

    I’m older than I look, but even while dealing with an unpleasant knee injury I move faster and better than 90% of software developers. And when I look at other coders my age, I see people old before their years. I believe youth is fleeting unless you take a little bit of time every day to keep your strength and flexibility.

    Anything else you would like to say?

    To paraphrase Jurassic Park, “Just because you can do a thing doesn’t mean you should do a thing.”

    As software developers, we have skills that let us do amazing things. With enough time and experience, we can do pretty much anything we are asked to do. That said, we should consider whether or not we should always do what we are asked to do.

    For example, the combined power of image recognition, big data, and distributed systems is really fun to play with, but we need to be aware that these tools can be dangerous. In the past year we’ve seen it used to affect opinion and elections, and this is only the beginning. It’s our responsibility to the future to be aware that the tools we are playing with have a lot of power, and that the people who are paying us to use them might not have the best intentions.

    Hence why I like to say, “Just because you can do a thing doesn’t mean you should do a thing.”

    Do check out “Two Scoops of Django 1.11: Best Practices for the Django Web Framework” by Two Scoops Press.
