3 Effective Examples of Django Async Views without Sleeping

    In August this year, Django 3.1 arrived with support for Django async views. This was fantastic news but most people raised the obvious question – What can I do with it? There have been a few tutorials about Django asynchronous views that demonstrate asynchronous execution while calling asyncio.sleep. But that merely led to the refinement of the popular question – What can I do with it besides sleep-ing?

    The short answer is – it is a very powerful technique to write efficient views. For a detailed overview of what asynchronous views are and how they can be used, keep on reading. If you are new to asynchronous support in Django and would like more background, read my earlier article: A Guide to ASGI in Django 3.0 and its Performance.

    Django Async Views

    Django now allows you to write views which can run asynchronously. First let’s refresh your memory by looking at a simple and minimal synchronous view in Django:

    from django.http import HttpResponse

    def index(request):
        return HttpResponse("Made a pretty page")
    

    It takes a request object and returns a response object. In a real-world project, a view does many things like fetching records from a database, calling a service or rendering a template. But these operations run synchronously, one after the other.

    In Django’s MTV (Model Template View) architecture, views are disproportionately more powerful than the other layers (I find them comparable to controllers in the MVC architecture, though such comparisons are debatable). Once you enter a view, you can perform almost any logic necessary to create a response. This is why asynchronous views are so important: they let you do more things concurrently.

    It is quite easy to write an asynchronous view. For example the asynchronous version of our minimal example above would be:

    async def index_async(request):
        return HttpResponse("Made a pretty page asynchronously.")
    

    This is a coroutine function rather than a regular function. You cannot call it directly to get a response; an event loop needs to execute it. But you do not have to worry about that difference, since Django takes care of all that.

    Note that this particular view is not invoking anything asynchronously. If Django is running in the classic WSGI mode, then a new event loop is created (automatically) to run this coroutine. So in this case, it might be slightly slower than the synchronous version. But that’s because you are not using it to run tasks concurrently.

    So then why bother writing asynchronous views? The limitations of synchronous views become apparent only at a certain scale. When it comes to large-scale web applications, probably nothing beats Facebook.

    Views at Facebook

    In August, Facebook released a static analysis tool to detect and prevent security issues in Python. But what caught my eye was how the views were written in the examples they had shared. They were all async!

    # views/user.py
    async def get_profile(request: HttpRequest) -> HttpResponse:
        profile = await load_profile(request.GET['user_id'])
        ...

    # controller/user.py
    async def load_profile(user_id: str):
        user = await load_user(user_id)  # Loads a user safely; no SQL injection
        pictures = await load_pictures(user.id)
        ...

    # model/media.py
    async def load_pictures(user_id: str):
        query = f"""
           SELECT *
           FROM pictures
           WHERE user_id = {user_id}
        """
        result = await run_query(query)
        ...

    # model/shared.py
    async def run_query(query: str):
        connection = create_sql_connection()
        result = await connection.execute(query)
        ...

    Note that this is not Django but something similar. Currently, Django runs the database code synchronously. But that may change sometime in the future.
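
    Meanwhile, here is a hedged sketch of how an async Django 3.1 view can still use the synchronous ORM today, by wrapping the call with asgiref’s sync_to_async adaptor (the Profile model here is hypothetical):

    from asgiref.sync import sync_to_async
    from django.http import JsonResponse

    from .models import Profile  # hypothetical model


    async def get_profile_async(request):
        # The ORM call is blocking, so run it in a thread via sync_to_async
        profile = await sync_to_async(Profile.objects.get)(user_id=request.GET["user_id"])
        return JsonResponse({"name": profile.name})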

    If you think about it, it makes perfect sense. Synchronous code can be blocked while waiting for an I/O operation for several milliseconds. Its asynchronous equivalent would not be tied up and could work on other tasks during that time. Therefore it can handle more requests with lower latencies. Handling more requests gives Facebook (or any other large site) the ability to serve more users on the same infrastructure.

    [Illustration: Scalability Problems in the 1800s, I suppose]

    Even if you are nowhere near Facebook scale, you could use Python’s asyncio as a more predictable concurrency mechanism than threads. A thread scheduler can interrupt a thread in the middle of a destructive update to a shared resource, leading to difficult-to-debug race conditions. Compared to threads, coroutines can achieve a higher level of concurrency with much less overhead.

    Misleading Sleep Examples

    As I joked earlier, most of the Django async views tutorials show an example involving sleep. Even the official Django release notes had this example:

    async def my_view(request):
        await asyncio.sleep(0.5)
        return HttpResponse('Hello, async world!')
    

    To a Python async guru, this code might hint at possibilities that were simply unavailable before. But to the vast majority, this code is misleading in many ways.

    Firstly, whether the sleep happens synchronously or asynchronously makes no difference to the end user. The poor chap who just opened the URL linked to that view will have to wait for 0.5 seconds before it returns a cheeky “Hello, async world!”. If you are a complete novice, you may have expected an immediate reply, with the “hello” greeting somehow appearing asynchronously half a second later. Of course, that sounds silly, but then what is this example trying to do compared to a synchronous time.sleep() inside a view?

    The answer is, as with most things in the asyncio world, in the event loop. If the event loop has other tasks waiting to run, that half-second window gives it an opportunity to run them. Note that those tasks may take longer than the window to complete. Cooperative multitasking assumes that everyone works quickly and hands control back to the event loop promptly.
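
    To actually see that window being used, here is a standalone sketch (outside Django) where two sleeping coroutines overlap, so the total time is about half a second rather than a full second:

    import asyncio
    import time


    async def napper(name):
        await asyncio.sleep(0.5)  # yields control back to the event loop
        print(f"{name} woke up")


    async def main():
        start = time.perf_counter()
        # Both naps overlap, so this takes ~0.5s in total, not ~1s
        await asyncio.gather(napper("first"), napper("second"))
        print(f"Took {time.perf_counter() - start:.2f}s")


    asyncio.run(main())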

    Secondly, it does not seem to accomplish anything useful. Some command-line interfaces use sleep to give users enough time to read a message before it disappears. But web applications are the opposite - a faster response from the web server is key to a better user experience. So what are we trying to demonstrate in such examples by slowing down the response?

    [Illustration: Letting Them Sleep Would Be a Better Idea]

    The best explanation for such simplified examples I can give is convenience. It needs a bit more setup to show examples which really need asynchronous support. That’s what we are trying to explore here.

    Better examples

    A rule of thumb to remember before writing an asynchronous view is to check whether it is I/O-bound or CPU-bound. A view which spends most of its time in a CPU-bound activity, e.g. matrix multiplication or image manipulation, would not really benefit from being rewritten as an async view. You should be focusing on the I/O-bound activities.

    Invoking Microservices

    Most large web applications are moving away from a monolithic architecture to one composed of many microservices. Rendering a view might require the results of many internal or external services.

    In our example, an ecommerce site for books renders its front page - like most popular sites - tailored to the logged in user by displaying recommended books. The recommendation engine is typically implemented as a separate microservice that makes recommendations based on past buying history and perhaps a bit of machine learning by understanding how successful its past recommendations were.

    In this case, we also need the results of another microservice that decides which promotional banners to display as a rotating banner or slideshow to the user. These banners are not tailored to the logged in user but change depending on the items currently on sale (active promotional campaign) or date.
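
    Before the views themselves, a note on setup: the snippets below assume these imports and placeholder service URLs (the endpoints are hypothetical - point them at wherever your microservices live):

    import asyncio

    import httpx
    from django.shortcuts import render

    PROMO_SERVICE_URL = "http://localhost:8001/promo/"  # hypothetical endpoint
    RECCO_SERVICE_URL = "http://localhost:8002/recco/"  # hypothetical endpoint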

    Let’s look at how a synchronous version of such a page might look:

    def sync_home(request):
        """Display homepage by calling two services synchronously"""
        context = {}
        try:
            response = httpx.get(PROMO_SERVICE_URL)
            if response.status_code == httpx.codes.OK:
                context["promo"] = response.json()
            response = httpx.get(RECCO_SERVICE_URL)
            if response.status_code == httpx.codes.OK:
                context["recco"] = response.json()
        except httpx.RequestError as exc:
            print(f"An error occurred while requesting {exc.request.url!r}.")
        return render(request, "index.html", context)
    

    Here, instead of the popular Python requests library, we are using the httpx library, because it supports making both synchronous and asynchronous web requests. The interface is almost identical.

    The problem with this view is that the time taken to invoke these services adds up, since they happen sequentially. The Python process is suspended until the first service responds, which could take a long time in a worst-case scenario.

    Let’s try to run them concurrently using a simplistic (and ineffective) await call:

    async def async_home_inefficient(request):
        """Display homepage by calling two awaitables synchronously (does NOT run concurrently)"""
        context = {}
        try:
            async with httpx.AsyncClient() as client:
                response = await client.get(PROMO_SERVICE_URL)
                if response.status_code == httpx.codes.OK:
                    context["promo"] = response.json()
                response = await client.get(RECCO_SERVICE_URL)
                if response.status_code == httpx.codes.OK:
                    context["recco"] = response.json()
        except httpx.RequestError as exc:
            print(f"An error occurred while requesting {exc.request.url!r}.")
        return render(request, "index.html", context)
    

    Notice that the view has changed from a function to a coroutine (due to the async def keyword). Also note that there are two places where we await a response from each of the services. You don’t have to understand every line here, as we will explain with a better example.

    Interestingly, this view does not work concurrently and takes the same amount of time as the synchronous view. If you are familiar with asynchronous programming, you might have guessed why: simply awaiting a coroutine does not make anything else run concurrently; it just yields control back to the event loop until the awaited result arrives. The view still gets suspended at each await.

    Let’s look at a proper way to run things concurrently:

    async def async_home(request):
        """Display homepage by calling two services asynchronously (proper concurrency)"""
        context = {}
        try:
            async with httpx.AsyncClient() as client:
                response_p, response_r = await asyncio.gather(
                    client.get(PROMO_SERVICE_URL), client.get(RECCO_SERVICE_URL)
                )
    
                if response_p.status_code == httpx.codes.OK:
                    context["promo"] = response_p.json()
                if response_r.status_code == httpx.codes.OK:
                    context["recco"] = response_r.json()
        except httpx.RequestError as exc:
            print(f"An error occurred while requesting {exc.request.url!r}.")
        return render(request, "index.html", context)
    

    If the two services we are calling have similar response times, then this view should complete in about half the time compared to the synchronous version. This is because the calls happen concurrently, as we would want.

    Let’s try to understand what is happening here. There is an outer try…except block to catch request errors while making either of the HTTP calls. Then there is an inner async…with block which provides a context containing the client object.

    The most important line is the one with the asyncio.gather call, which takes the coroutines created by the two client.get calls. The gather call will execute them concurrently and return only when both of them are complete. The result is a list of responses, which we unpack into the two variables response_p and response_r. If there were no errors, these responses are added to the context sent for template rendering.
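
    One caveat worth knowing: by default, gather raises the first exception it encounters and the other response is lost. If you would rather render the page with whatever succeeded, a variant using asyncio’s standard return_exceptions flag could look like this:

    async def async_home_tolerant(request):
        """Render the homepage even if one of the two services fails"""
        context = {}
        async with httpx.AsyncClient() as client:
            results = await asyncio.gather(
                client.get(PROMO_SERVICE_URL),
                client.get(RECCO_SERVICE_URL),
                return_exceptions=True,  # failures are returned, not raised
            )
        for key, result in zip(["promo", "recco"], results):
            # Each result is either a response or an exception instance
            if not isinstance(result, Exception) and result.status_code == httpx.codes.OK:
                context[key] = result.json()
        return render(request, "index.html", context)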

    Microservices are typically internal to the organization, hence the response times are low and less variable. Yet it is never a good idea to rely solely on synchronous calls for communicating between microservices. As the dependencies between services increase, they create long chains of request and response calls. Such chains can slow down services.

    Why Live Scraping is Bad

    We need to address web scraping, because so many asyncio examples use it. I am referring to cases where multiple external websites, or pages within a website, are fetched concurrently and scraped for information like live stock market (or Bitcoin) prices. The implementation would be very similar to what we saw in the microservices example.

    But this is quite risky, since a view should return a response to the user as quickly as possible. Trying to fetch external sites which have variable response times or throttling mechanisms could result in a poor user experience or, even worse, a browser timeout. Since microservice calls are typically internal, response times can be controlled with proper SLAs.

    Ideally, scraping should be done in a separate process scheduled to run periodically (using Celery or RQ). The view should simply pick up the scraped values and present them to the users.
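
    A minimal sketch of that split, assuming Celery and a hypothetical ScrapedPrice model - a beat schedule runs the task periodically, and the view just reads whatever was last saved:

    # tasks.py - runs in a Celery worker, outside the request cycle
    from celery import shared_task

    from .models import ScrapedPrice  # hypothetical model


    @shared_task
    def scrape_prices():
        # Slow, variable-latency fetching happens here, not in a view
        price = fetch_btc_price()  # hypothetical scraping helper
        ScrapedPrice.objects.update_or_create(
            symbol="BTC", defaults={"price": price}
        )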

    Serving Files

    Django addresses the problem of serving files by trying hard not to do it itself. This makes sense from a “Do not reinvent the wheel” perspective. After all, there are several better solutions to serve static files like nginx.

    [Illustration: 'Serving simultaneously is not for everyone']

    But often we need to serve files with dynamic content. Files typically reside on (slower) disk-based storage (though we now have much faster SSDs). While reading a file is quite easy to accomplish with Python, it could be expensive in terms of performance for large files. Regardless of the file’s size, it is a potentially blocking I/O operation, and that waiting time could instead be used to run other tasks concurrently.

    Imagine we need to serve a PDF certificate in a Django view. However the date and time of downloading the certificate needs to be stored in the metadata of the PDF file, for some reason (possibly for identification and validation).

    We will use the aiofiles library here for asynchronous file I/O. The API is almost the same as Python’s familiar built-in file API. Here is how the asynchronous view could be written:

    import datetime

    import aiofiles
    from django.http import HttpResponse


    async def serve_certificate(request):
        timestamp = datetime.datetime.now().isoformat()

        response = HttpResponse(content_type="application/pdf")
        response["Content-Disposition"] = "attachment; filename=certificate.pdf"
        async with aiofiles.open("homepage/pdfs/certificate-template.pdf", mode="rb") as f:
            contents = await f.read()
            # Naive byte replacement; assumes the template PDF contains a
            # literal %timestamp% placeholder
            response.write(contents.replace(b"%timestamp%", bytes(timestamp, "utf-8")))
        return response
    

    This example illustrates why we need asynchronous template rendering in Django. But until that gets implemented, you could use the aiofiles library to pull local files without skipping a beat.

    There are downsides to directly using local files instead of Django’s staticfiles. In the future, when you migrate to a different storage space like Amazon S3, make sure you adapt your code accordingly.

    Handling Uploads

    On the flip side, uploading a file is also a potentially long, blocking operation. For security and organizational reasons, Django stores all uploaded content into a separate ‘media’ directory.

    If you have a form that allows uploading a file, then we need to anticipate that some pesky user will upload an impossibly large one. Thankfully, Django passes the file to the view as chunks of a certain size. Combined with aiofiles’ ability to write a file asynchronously, we can support highly concurrent uploads.

    async def handle_uploaded_file(f):
        async with aiofiles.open(f"uploads/{f.name}", "wb+") as destination:
            for chunk in f.chunks():
                await destination.write(chunk)
    
    
    async def async_uploader(request):
        if request.method == "POST":
            form = UploadFileForm(request.POST, request.FILES)
            if form.is_valid():
                await handle_uploaded_file(request.FILES["file"])
                return HttpResponseRedirect("/")
        else:
            form = UploadFileForm()
        return render(request, "upload.html", {"form": form})
    

    Again this is circumventing Django’s default file upload mechanism, so you need to be careful about the security implications.
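
    For reference, the UploadFileForm used above is not shown in the snippet; a minimal definition (essentially the one in Django’s file upload documentation) would be:

    from django import forms


    class UploadFileForm(forms.Form):
        file = forms.FileField()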

    Where To Use

    The Django async project has full backward compatibility as one of its main goals, so you can continue to use your old synchronous views without rewriting them as async. Asynchronous views are not a panacea for all performance issues, and most projects will continue to use synchronous code, since it is quite straightforward to reason about.

    In fact, you can use both async and sync views in the same project. Django will take care of calling each view in the appropriate manner. However, if you are using async views, it is recommended to deploy the application on an ASGI server.

    This gives you the flexibility to try asynchronous views gradually, especially for I/O-intensive work. You need to be careful to pick only async libraries, or to mix sync and async code carefully (using the async_to_sync and sync_to_async adaptors).
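
    As a quick illustration of those adaptors, here is a hedged sketch (the helper names are hypothetical): sync_to_async lets an async view call a blocking function without stalling the event loop, while async_to_sync lets a classic sync view reuse an async helper:

    from asgiref.sync import async_to_sync, sync_to_async


    async def async_view(request):
        # Runs the blocking helper in a thread pool
        data = await sync_to_async(blocking_helper)()
        ...


    def sync_view(request):
        # Drives the async helper to completion from synchronous code
        data = async_to_sync(async_helper)()
        ...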

    Hopefully this writeup gave you some ideas.

    Thanks to Chillar Anand and Ritesh Agrawal for reviewing this post. All illustrations courtesy of Old Book Illustrations.


    Ray Tracer in Python (Part 6) - Show Notes of "Firing All Cores"

    When it comes to code optimization, there is an overused statement about Premature Optimization. Here is the rarely-shown full quote:

    “Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%.” – Donald Knuth

    I understand this as: do not worry about small optimizations all the time. It is best to focus on the areas which have the most return on (time) investment. Usually you need to do extensive performance measurements to find those areas, but there are some natural places to look.

    So far, we have not been too concerned about performance, especially when it affected readability (else we wouldn’t be writing it in Python). However, in the previous part, we got a huge speedup by switching to the PyPy interpreter.

    To me the next big opportunity lies in using all the cores. As an experienced Python programmer, I know this means using the multiprocessing module. But I also need to explain why other options, notably Python threads, would not help.

    To demonstrate this, I wrote a multi-threaded C program, a multi-threaded Python program and a multi-processing Python program - all doing the same thing. What was most entertaining was watching the GIL in action, allowing only one thread to execute at a time.
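
    The gist of the multi-processing version, as a hedged Python sketch (the workload is a stand-in): a CPU-bound function distributed with multiprocessing.Pool keeps all cores busy, while the same work on threads gets serialized by the GIL:

    import multiprocessing


    def burn_cpu(n):
        # Pure computation - exactly the kind of work the GIL serializes across threads
        total = 0
        for i in range(n):
            total += i * i
        return total


    if __name__ == "__main__":
        with multiprocessing.Pool() as pool:
            # One chunk per core, each in a separate process
            results = pool.map(burn_cpu, [10_000_000] * multiprocessing.cpu_count())
        print(sum(results))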

    Race Conditions

    Race conditions are often explained using a bank ATM example. I think they can be explained in a non-technical way using many real-life scenarios; I wanted to explain them using a cooking example. It’s weird how many asynchronous and parallel concepts can be taught using what we do in the kitchen. So much so that technical job interviews could be made more effective by simply checking your culinary skills (“Sorry we don’t have a whiteboard, just a cutting board for you.").
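
    For the record, here is a contrived sketch of the classic lost-update race the ATM example describes - the explicit read-modify-write with a tiny pause makes the unlucky interleaving show up reliably:

    import threading
    import time

    balance = 100


    def withdraw(amount):
        global balance
        current = balance  # read
        time.sleep(0.001)  # pause here, inviting the scheduler to switch threads
        balance = current - amount  # write based on a now-stale read


    threads = [threading.Thread(target=withdraw, args=(30,)) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    print(balance)  # Expected 40, but both threads read 100, so it prints 70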

    Ideally, the entire explanation should be animated. My drawing skills are decent, but I cannot animate even a bouncing ball. Don’t laugh, just try it. Firstly, there are several cues that any animated object gives which can only be detected if you watch it in slow motion. The Twelve Principles of Animation is a good start.

    Secondly, it is a lot of work. A good animator knows how to give maximum expression with the minimum number of frames. Timing is also crucial in animation. Even a subtle timing error makes animations look janky or confusing. Mere mortals would need to spend days to eventually produce a 5-second clip.

    [Illustration: Animating the Race Conditions]

    So I took the safe approach of showing a slideshow. I think it looks decent. I might post it as a separate clip on race conditions, so that it doesn’t get lost in this video.

    These are the topics we will cover in this episode:

    • Introduction
      • An Embarrassingly Parallel Problem
      • What’s a Thread?
      • Threads in C
      • Threads in Python
      • Race Condition Animated
      • Why Python has GIL
      • Multi-processing in Python
    • Sub-problem: Firing All Cores
    • Coding the solution
      • Command-line argument
      • Dividing pixels among processes
      • Passing Data via Files
      • Value for Progressbar
      • Bug Hunting
    • Performance Comparison
    • Topics Not Covered
    • Learning More
    • Final Words

    Here is the video:

    Code for part six is tagged on the Puray GitHub project

    Show Notes

    Books and articles that can help understand this part:

    Note: References may contain affiliate links

    Wrapping up

    This concludes the ray tracing series that I had conceived more than a year back. If you make your own ray tracer, please don’t forget to tag me.


    Ray Tracer in Python (Part 5) - Show Notes of "Some Light Reflections"

    One of the fascinating ways to build a complex system like a computer is to build it from scratch using simple logic gates (a good book which shows how is The Elements of Computing Systems). The same goes for the late John Conway’s Game of Life, which starts with a simple set of rules but shows very life-like behaviour. This is called emergent complexity. I guess the only way to understand something that seems complex is to understand its basic principles.

    Look carefully at the, by now familiar, image that we will be rendering by the end of this part:

    [Image: Render of the two balls scene]

    Notice that within the pinkish-purple ball there is a reflection of the red ball, which itself reflects the pink ball. This kind of detailing, which unfolds on closer inspection, is what gives realistic renders their beauty. Yet it is governed by the most basic laws of physics, like the laws of reflection.

    Reflection

    The law tells us that the angle of reflection is the same as the angle of incidence. But if we apply it to the world of vectors, a rather different-looking formula emerges, which can be derived from the very same law. The reflected ray is traced again and the process continues.
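
    For the curious, that vector form is (with $\hat{D}$ as the incident ray direction and $\hat{N}$ as the unit surface normal at the hit point):

    $$\vec{R} = \hat{D} - 2(\hat{D} \cdot \hat{N})\,\hat{N}$$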

    If you have been following the series closely so far, then you might point out that we have already dealt with this earlier. Yes, diffuse and specular shading are special cases of a material reflecting light. But since we are now considering mirror-like reflecting surfaces it is time to look at the general formula for reflection.

    The computation for each pixel will now increase manyfold, so the overall render time will grow in proportion to the maximum depth of reflections we need.

    Procedural Materials

    A plainly colored object is not very interesting, so you would find even the earliest ray-traced images containing a chessboard pattern:

    [Image: The Compleat Angler (1978) by Turner Whitted]

    So I introduce a procedurally generated chessboard pattern into the scene. Procedural textures are fascinating and a lot of fun to make. Compared to image textures, they have almost infinite detail. Sort of like analog versus digital.

    The chessboard pattern’s formula is easy to guess, so it is an ideal introduction. Once you start playing around, there is an entire universe of textures to explore, with marble, Voronoi and Perlin noise patterns. Some even go to the extent of building entire scenes with only procedural textures. This is deeply satisfying but probably pointless.
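
    The guessable formula is just the parity of the summed, floored coordinates. A minimal sketch (the class and attribute names are mine, not necessarily the series’ actual code):

    import math


    class ChequeredMaterial:
        """Procedural chessboard texture over the x-z ground plane"""

        def __init__(self, color1, color2):
            self.color1 = color1
            self.color2 = color2

        def color_at(self, position):
            # Squares alternate as floor(x) + floor(z) flips between even and odd
            if (math.floor(position.x) + math.floor(position.z)) % 2 == 0:
                return self.color1
            return self.color2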

    Plugin Scenes

    Most toy ray tracers are happy generating their scene in the main program. But this quickly becomes frustrating when you want to render more than a couple of examples. The straightforward solution would be to define a scene as data, say using JSON, and import the scene given as an argument. This is how games like Doom load levels.

    JSON, YAML and other configuration languages are deceptively simple to read, but you could spend a lot of time writing them due to their tiny quirks with commas and whitespace. They are also not suited for procedurally generated scenes, which come up quite a bit in ray tracers. You would soon wish these languages were Turing complete. So I decided to ditch all that and use plain old Python instead.

    To be honest, I was not comfortable allowing an arbitrary Python file to describe a scene. But the power and flexibility it brings is a great tradeoff for the security risk. I used importlib to import the modules, inspired by Django.
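
    The import mechanism itself is tiny. A sketch of the idea, assuming the scene module exposes whatever objects the renderer expects (the attribute names here are illustrative):

    import importlib
    import sys


    def load_scene(module_name):
        # Make the working directory importable, then load the scene by name,
        # e.g. "twoballs" for a twoballs.py sitting next to the ray tracer
        sys.path.append(".")
        return importlib.import_module(module_name)


    scene = load_scene(sys.argv[1])
    # scene.CAMERA, scene.OBJECTS, scene.LIGHTS etc. are then handed to the renderer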

    I can now define a new procedural texture material class inside a scene! This makes the ray tracer quite extensible like a plugin system. I love this approach and look forward to trying this in future projects.

    Accelerating Python

    Towards the end we see a dramatic 7X speedup of the ray tracer due to the use of PyPy. I also mention my rough rule of thumb to increase Python performance:

    Processor Bound? Try PyPy.
    IO Bound? Try AsyncIO.

    If you are just learning to improve the performance of your Python programs, this would be pretty bad advice on its own. First, make sure you profile your program and identify the performance hotspots. Then try different ways to optimize those places. Only after you have tried all that, and the performance is still bad, should you reach for my rule of thumb and its unconventional ways to get great results.
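
    Profiling needs nothing beyond the standard library. For instance, cProfile together with pstats can list the functions your program actually spends its time in:

    import cProfile
    import pstats


    def render():
        ...  # the code being measured


    cProfile.run("render()", "stats.out")
    # Show the ten most expensive functions by cumulative time
    pstats.Stats("stats.out").sort_stats("cumulative").print_stats(10)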

    Ray tracing concepts

    With this part, I would have covered all the basic ray tracing concepts that I had planned to cover. The next part would be about improving the performance of the ray tracer by using multiple cores.

    Many have contacted me asking whether I would be covering topics like Dielectrics, Depth of Field, Anti-aliasing etc. I think there are enough books like Ray Tracing Gems and Ray Tracing in One Weekend which cover all that and much more. If you have followed this series then reading those books would be much easier and you would have a ready implementation to tinker with.

    Nevertheless, I may work on a follow-up if people find that useful and time permits. So do let me know.

    These are the topics we will cover in this episode:

    • Introduction
      • Laws of Reflection
      • Stack Overflow
      • Scene Definition
    • Sub-problem: Some Light Reflections
    • Coding the solution
      • Chessboard Material
      • Ground Plane
      • Config vs Code as Config
      • Speedup

    Here is the video:

    Code for part five is tagged on the Puray GitHub project

    Show Notes

    Books and articles that can help understand this part:

    Further reading on ray tracing:

    Note: References may contain affiliate links


    Ray Tracer in Python - Show Notes of "Ray Tracing a Coronavirus"

    The first ray-traced animation I had ever seen was in my first year of Engineering. My senior showed us a gold-plated logo of the institution slowly spinning and glinting in the light. It had an amazing level of detail - a triangular frame, a gear wheel on the left, a tower on the right and even a palm tree in the center. We were dying to know how he did it. When he showed us, it blew our minds - it was all code! Every shape was made by combining (or subtracting) primitive solids like spheres, cylinders and cubes. It featured on the masthead of the first edition of our online magazine (distributed on 3½-inch floppies).

    Of course, these days 3D modelling is mostly done by artists in a graphics program like 3ds Max or Maya. But not everything is shaped by the hand of an artist. Procedural code is still used to create thousands of unique soldiers in an army scene or an infinite world in a multiplayer game. Computer-generated art can often surprise or amuse you.

    So when I had to design a model of the omnipresent SARS-CoV-2 virus for a poster, even though I was very tempted to use such tools, I opted to generate the model in code. My Python ray tracer (currently in its fourth part of the video series) was decent enough to create 3D-looking spheres. Having only spheres is a big constraint, but you can be more creative under constraints.

    Deconstruction

    There is a process fundamental to artistically recreating any real-world object – deconstruction. When you learn to draw, you are asked to “block out” the simple shapes you are trying to draw. So a face becomes an elongated sphere, a nose becomes a triangle, and so on. If your big shapes are correctly proportioned, then it will be a lot easier to add in the details.

    In the good old days, computer artists used graph paper - like my senior, who traced the logo from the college magazine, enlarged it on graph paper and wrote down all the coordinates. These days, you could use a 2D drawing program like Inkscape to overlay a photograph and trace over it. But the idea is essentially the same - find all the large blocking shapes that make up your image.

    [Image: Graph paper sketch of Prince of Persia by Jordan Mechner in 1987]

    Reconstruction

    Generating the model in the computer can be a process of trial and error. In the days of slower computers, rendering a simple object took nearly an hour. Even with today’s dramatically faster machines, rendering a complex scene could take hours. There is simply a ton of numbers to crunch.

    In the case of the coronavirus, I needed to figure out how to make a crown. Essentially, I needed to position points evenly around a circle. People say a lot of things about high school trigonometry and how much of a waste of time it was. But I found it quite useful, not only to arrange the spikes of the crown but also to give them a slight random shift. This means the virus looks slightly different on each render. I had a lot of fun animating several renders to a beat in the video.
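
    The trigonometry in question is compact. A sketch of placing n spikes evenly around a circle with a slight random wobble (names are illustrative):

    import math
    import random


    def spike_positions(n, radius, jitter=0.1):
        positions = []
        for i in range(n):
            # Evenly spaced angles, nudged randomly so no two renders match
            angle = 2 * math.pi * i / n + random.uniform(-jitter, jitter)
            r = radius + random.uniform(-jitter, jitter)
            positions.append((r * math.cos(angle), r * math.sin(angle)))
        return positions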

    Presentation

    I couldn’t stop playing with the model even after the poster was made. I tweaked it into various color combinations that I found online. Ironically, it looks distinctively charming in any avatar.

    It shows that simple things can be more awesome than the most professionally done creations. Especially if you make your own. I am looking forward to seeing what others create with this.

    Here is the video:

    Code for part four is the starting point of the video.

    Final code is also tagged in the repository.

    Show Notes

    Links:


    Ray Tracer in Python (Part 4) - Show Notes of "Let there be light"

    Probably a global lockdown is a good time to resume a series that demands a lot of your time. Not really if your kids are home too. But I’ve been getting enough reminders through comments and emails that I thought it’s time to tackle this challenging episode.

    Yes, this is an episode with a lot of Physics and Mathematics. But you will observe it has a very gentle learning curve since there is enough background presented in the problem requirements and in previous episodes. The real challenge is in translating the math to code.

    Computers are computing devices crunching numbers all the time. So you might think implementing math formulas should be straightforward. Unless you are a scientist or a mathematician, in which case, you would know the truth which is – it is not straightforward at all.
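
    Take the Phong reflection model covered in this episode. On paper it is one tidy equation,

    $$I = k_a\, i_a + k_d\, (\hat{L} \cdot \hat{N})\, i_d + k_s\, (\hat{R} \cdot \hat{V})^{\alpha}\, i_s$$

    with ambient, diffuse and specular terms; in code, you also have to clamp negative dot products, loop over every light and keep the resulting colors within range.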

    I remember that before Doom came out, it was very hard to develop a 3D engine. Most of the available research involved understanding computer graphics textbooks with a lot of math theory. It’s a painful task to translate formulas into algorithms and, later, into performant code. There are many ways you could get it wrong, but only one way to get it right. Even more frustrating is forgetting to account for edge conditions like negative results, division by zero and floating-point errors.

    I remember referring to one of the most popular books at that time, Numerical Recipes in C++, to understand how to implement some of the algorithms. But I had to abandon the project after a working prototype, because making a game engine (alone) is a tonne of work!

    In this part, I have used a lot of images and slides to make sure that I don’t skip any explanations (like how cosine and dot products are related). I know that these videos are watched by all kinds of people – students, experienced programmers and non-programmers. So I have not made any assumptions.

    Hopefully you will get through this part without facing any challenges. Even if you do face some, it should be fun 😊

    These are the topics we will cover in this episode:

    • Introduction
      • Adding lights
      • Phong reflection model
      • Ambient, Diffuse and Specular Shading
    • Sub-problem: Let there be light
    • Coding the solution
      • New classes
      • Command-line arguments with argparse
      • Progress Bar(?)

    Here is the video:

    Code for part four is tagged on the Puray GitHub project

    Show Notes

    Books and articles that can help understand this part:

    • Game Engine Architecture – A great introduction to graphics concepts and techniques used in computer games. It might differ from ray tracing, since the focus is on realtime rendering rather than realism.

    • Argparse Tutorial – argparse is the new standard for Python argument parsing.

    Note: References may contain affiliate links

