Benchmarking Alusus Language in Web Dev Against JS and Python
Alusus is a versatile programming language that combines the performance of low-level languages with the flexibility of high-level languages, and allows users to extend it within the scope of their program as needed. In previous articles we introduced the language and showed how to use it to build web applications, but we have not yet touched on the important subject of performance. In this article we benchmark Alusus against JavaScript and Python in the context of web development, on both the server side and the front end.
We will begin by evaluating server performance: we run a simple, functionally identical program written in the three languages, then apply load with the JMeter benchmarking tool to measure response rate and resource consumption. After that, we move on to comparing user interface performance, where the comparison is limited to JavaScript and Alusus, since Python does not run inside the browser. We will build one user interface using the React library on the JavaScript side and another using the WebPlatform framework on the Alusus side, then analyze the two applications' responsiveness and resource consumption using the performance monitoring tool built into the Firefox browser.
We chose JavaScript and Python because they are the most widely used languages among web developers, and we chose React for the same reason: it is the most widely used user interface library.
Benchmark Subject
The application we will use for the benchmark is a simple one that loads data from a database and displays it in the browser as a table. Our database will contain the passenger data of the Titanic, which can be downloaded from this link:
https://neon.com/docs/import/import-sample-data#titanic-passenger-data
This application has one back-end endpoint that loads all the data from the table, which contains over 1300 records, and returns the entire data set to the caller in JSON format. The user interface will display the data loaded from the endpoint in a table. The application will not allow modifying data on the server, but the user interface will show input fields that let the user add and modify data within the interface only, without sending anything to the server. The purpose of this is to exercise the user interface enough to compare the two UI stacks.
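The endpoint's job is simply "load every row, serialize it to JSON". Here is a minimal sketch of that behavior, using an in-memory SQLite database as a stand-in for the article's PostgreSQL database; the table and column names are illustrative, not the actual Titanic schema:

```python
import json
import sqlite3

# In-memory SQLite stands in for PostgreSQL; schema is illustrative only.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE passenger (name TEXT, survived INTEGER)")
db.executemany(
    "INSERT INTO passenger VALUES (?, ?)",
    [("Braund, Mr. Owen Harris", 0), ("Cumings, Mrs. John Bradley", 1)],
)

def get_passengers() -> str:
    """What the single endpoint does: load every row, serialize to JSON."""
    cur = db.execute("SELECT name, survived FROM passenger")
    rows = [{"name": n, "survived": bool(s)} for n, s in cur.fetchall()]
    return json.dumps(rows)

print(get_passengers())
```

The real apps do the same thing through Sequelize, Flask, and the Rows library respectively; only the plumbing differs.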
We will use a PostgreSQL database and write the server code in all three languages. We will write one version of the user interface in Alusus using WebPlatform, and the other in JavaScript using React.
The complete source code in all three languages can be found in this repository:
https://github.com/sarmadka/TitanicExample
Benchmark Environment
We will conduct the comparison on a laptop with the following specifications:
| Component | Specification |
|---|---|
| Laptop brand | Asus Zenbook S16 |
| Processor | AMD Ryzen AI 9 HX 370 |
| Memory | 32 GB LPDDR5X 7500 |
| Storage | Western Digital 1TB NVMe Gen 4 x4 |
| Operating System | Ubuntu 24.04 |
We will use the following versions for the JavaScript app:

| Component | Version |
|---|---|
| JavaScript (NodeJS) | 20.12.2 |
| Express library | 4.19.2 |
| Sequelize library | 6.37.3 |
And these are the versions for the Python app:

| Component | Version |
|---|---|
| Python | 3.12.3 |
| Flask library | 3.1.2 |
| Gunicorn server | 25.1.0 |
Finally, the Alusus-based app uses the following versions:

| Component | Version |
|---|---|
| Alusus | 0.14.2 |
| WebPlatform library | 0.7.1 |
| Rows library | 0.3.3 |
Server Performance Comparison
We will start with a single-thread test, running the load for 10 seconds and then computing the number of completed requests per second.
With a single thread, JMeter sends a request and waits for the response before sending the next one. This guarantees that the server never has more than one request in flight, i.e., requests are not processed in parallel, so we compare the efficiency of the language itself without the influence of thread count. The result of this test was as follows:
- JavaScript: 143.4 requests/second
- Python: 109.5 requests/second
- Alusus: 313.2 requests/second
We note from this result that Alusus was about twice as fast as JavaScript, and approximately three times faster than Python. Since requests are sent sequentially and not in parallel, this result indicates that the completion time for a single request in Alusus was half that of JavaScript, and one-third that of Python. That is, Alusus’s performance excelled not only in the number of users the server can handle, but also in response time.
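Because the single-thread test is strictly sequential, throughput and mean per-request latency are reciprocals of each other. The latencies implied by the numbers above work out as follows:

```python
# Sequential load: only one request in flight at a time, so
# mean latency (ms) = 1000 / throughput (requests/second).
for name, rps in [("JavaScript", 143.4), ("Python", 109.5), ("Alusus", 313.2)]:
    print(f"{name}: {1000 / rps:.2f} ms per request")
# JavaScript: 6.97 ms, Python: 9.13 ms, Alusus: 3.19 ms
```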
Performance is important, but it is not the only factor that matters. Memory consumption is the other, especially for a server that may need to serve hundreds of users simultaneously, or more. To measure memory consumption we used the Linux time command, which reports the peak memory consumption (the high-water mark). This is done by running the server under the time command, as follows:
`/bin/time -v <executable>`
After running the server under the time command, we reran the same JMeter test, then stopped the server and read the time command's output. The number that interests us is the maximum resident set size (RSS), which was as follows for the three languages:
- JavaScript: 110.62 MB
- Python: 63.13 MB
- Alusus: 20.15 MB
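For reference, the RSS figure comes from one line of GNU time's verbose output. A small stdlib sketch of extracting it (the sample text below is abridged and illustrative, and the conversion assumes the kbytes value is in KiB):

```python
import re

# Abridged sample of GNU time's -v output; the real output has many more lines.
sample = """\
\tCommand being timed: "./server"
\tUser time (seconds): 4.12
\tMaximum resident set size (kbytes): 20634
\tExit status: 0
"""

def max_rss_mb(time_output: str) -> float:
    """Extract the high-water-mark RSS and convert kbytes to MB."""
    match = re.search(r"Maximum resident set size \(kbytes\): (\d+)", time_output)
    return int(match.group(1)) / 1024

print(f"{max_rss_mb(sample):.2f} MB")  # → 20.15 MB
```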
Again, Alusus outperforms JavaScript and Python. This is expected, as it is a low-level language and relies on reference counting for memory management instead of garbage collection. It is worth noting that part of the memory consumption in the JavaScript and Python cases is due to the interpreter itself, which does not apply to Alusus, since Alusus allows ahead-of-time (AOT) compilation; we used AOT compilation for the Alusus server in this test. Some might consider this comparison unfair, but it is not: this is not a competition between the development teams of the three languages, but a comparison from the perspective of a user, who would use AOT when building and publishing their Alusus-based apps, especially since the default configuration of Alusus libraries uses AOT compilation in production. It is also worth noting that running Alusus in JIT mode gives performance identical to AOT mode; the only difference is the additional memory consumed by the presence of the compiler itself.
In the second test, we increased the number of threads in JMeter to 8 instead of one, while keeping the execution time at 10 seconds. In this case, the server was subjected to 8 parallel requests, and the result was as follows:
- JavaScript: 152.3 requests/second
- Python: 116.4 requests/second
- Alusus: 1566.5 requests/second
Notice the significant widening of the gap between Alusus and its competitors here. This is primarily because Alusus is multithreaded, while JavaScript and Python execute user code on a single thread. Both ecosystems offer workarounds that run multiple worker processes in parallel, such as Node's cluster module or Gunicorn's worker processes. That would certainly boost throughput significantly, but Alusus would still be faster given the same number of available threads, and running multiple processes comes at the cost of increased memory consumption. Speaking of memory, consumption in the 8-thread case was as follows:
- JavaScript: 113.94 MB
- Python: 62.84 MB
- Alusus: 20.34 MB
With the increase in thread count, memory consumption barely changed, and Alusus still leads by a large margin. If we ran multiple copies of the JavaScript and Python servers to support more concurrent users, the memory gap would widen even further. One odd detail in this result is that Python's memory consumption was lower this time than in the single-thread case; I attribute this to measurement noise, or to some randomness in the timing of the garbage collector or the like.
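What JMeter does in these tests, namely N threads each sending a request and waiting for the response before sending the next one for a fixed duration, can be approximated with a short stdlib sketch. The stub server and the one-second duration below are purely illustrative (the article's tests ran for 10 seconds against the real servers):

```python
import threading
import time
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class StubHandler(BaseHTTPRequestHandler):
    """Tiny stand-in for the benchmarked server: returns a fixed JSON body."""
    def do_GET(self):
        body = b'{"ok": true}'
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):
        pass  # silence per-request logging

server = ThreadingHTTPServer(("127.0.0.1", 0), StubHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

DURATION = 1.0  # seconds; JMeter ran for 10
counts = []

def worker():
    # Each thread: send a request, wait for the response, repeat.
    done = 0
    deadline = time.monotonic() + DURATION
    while time.monotonic() < deadline:
        with urllib.request.urlopen(f"http://127.0.0.1:{port}/") as resp:
            resp.read()
        done += 1
    counts.append(done)

threads = [threading.Thread(target=worker) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
server.shutdown()
print(f"{sum(counts) / DURATION:.1f} requests/second")
```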
User Interface Performance Comparison
The user interface performance comparison depends on two things: memory consumption and the responsiveness of the interface in non-trivial cases. Our interface is very simple, just a table, but we want to simulate a more complex one and amplify the load so the performance difference becomes easy to notice with the performance monitoring tools provided by Firefox. Therefore, we will increase the data size by a factor of 8 by repeating it on the server side, i.e., returning 8 duplicate copies of the data instead of one. After the page loads, we will perform two operations in the user interface and measure the interface's load and responsiveness during these two operations: first adding a new record to the table, and second toggling a field value in the table.
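The eightfold amplification on the server side amounts to repeating the data set before serialization; a minimal sketch (field names illustrative):

```python
import json

# Stand-in for the ~1300 records loaded from the database.
rows = [{"name": "Braund, Mr. Owen Harris", "survived": False}]

# Return 8 duplicate copies of the data instead of one.
amplified = rows * 8
payload = json.dumps(amplified)
print(len(amplified))  # → 8
```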
To measure performance, we followed these steps:
- We opened the Performance tab in Firefox Developer Tools, then clicked on Start recording.
- On our site, we entered a name in the second input field at the top of the page, so the field value became ‘Sarmad’, then we clicked the Add button.
- In the same input field, we added an equals sign (=) after the name, followed by a new name, so the field value became ‘Sarmad=Sarmad Abdullah’, then we clicked the Update button. This replaces the value before the equals sign with the value after it.
- In Firefox Developer Tools, we clicked the Capture recording button.
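The update convention in the third step, `old=new` typed into the input field, amounts to a single split on the first equals sign. A sketch of that parsing (the helper name here is hypothetical, not from the repository):

```python
def parse_update(field_value: str) -> tuple[str, str]:
    """Split 'old=new' on the first '=': the UI replaces value `old` with `new`."""
    old, new = field_value.split("=", 1)
    return old, new

print(parse_update("Sarmad=Sarmad Abdullah"))  # → ('Sarmad', 'Sarmad Abdullah')
```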
We repeated the above steps for both sites, and the results were as in the following two images. The first image shows the performance result for Alusus.
In the Alusus result, we note that most of the time (74%) was spent in the 'Refresh Driver tick' task, which covers the work the Firefox engine does to update the interface. We also note that the 'relative memory at this time' metric, i.e., the amount of memory growth since the start of recording, was 7.17 MB.
So how does the result for JavaScript + React compare? Let’s see:
Here we notice several things. The first thing that catches the eye in this screenshot is the presence of two red lines at the top, which we have indicated with green arrows. The red line on the timeline indicates the period during which event processing is delayed due to the browser being busy executing user code. In other words, this is the period when the site is frozen and does not respond to user input, which you do not find in the Alusus result. The second noteworthy thing is what we indicated with a blue arrow: that the majority of the processor time was spent executing user code, i.e., JavaScript code within the React library, whereas in the Alusus case, the majority was spent in the browser engine. The third thing to note, indicated with a yellow arrow, is the ‘relative memory at this time’ value, which here is 336 MB, compared to the 7.17 MB in the Alusus case!
What about total memory consumption? This can be measured from the Memory tab in Firefox Developer Tools by clicking the Take snapshot button to capture a memory usage snapshot for each site. For each site, we measured memory consumption at two stages: first immediately after loading the site, and second after performing the two operations described above (adding a record, then toggling its value). The results were as follows:
| Stack | Stage | Memory Consumption (MB) |
|---|---|---|
| Alusus | Upon site load | 468.98 |
| Alusus | After performing the two operations | 469.16 |
| JavaScript | Upon site load | 496.14 |
| JavaScript | After performing the two operations | 560.46 |
We note here that memory consumption in the Alusus case was initially slightly lower than in the JavaScript case (468.98 vs. 496.14 MB), which is not surprising: most of the consumption is due to the relatively large data set, and the data is identical in both cases. The situation differed after performing the two operations. In the Alusus case, consumption barely increased (468.98 to 469.16 MB), which is expected, as the two operations are simple and do not require significant memory. In the JavaScript and React case, by contrast, consumption rose from 496.14 to 560.46 MB, an increase of roughly 64 MB, which is large relative to the simplicity of these operations.
As with the server code, Alusus also prevailed in the user interface, on both metrics: performance and memory consumption. It is worth noting that the performance difference in the UI is not only due to the difference between the two languages, but also to the difference between the two user interface libraries: React relies on a complex and resource-heavy design, while the WebPlatform library used by Alusus follows a design that prioritizes performance, simplicity, and low resource consumption.
Conclusion
Although these comparisons are not comprehensive, they provide a clear glimpse into the benefit that the Alusus language offers in terms of performance and resource consumption. In this era of rising semiconductor prices, we find that Alusus offers an important advantage to programmers, helping them reduce costs by minimizing the required resources such as memory and processor speed. Alusus not only delivers impressive performance, but it also provides this performance accompanied by ease of use that rivals that of high-level languages. In fact, I don’t think we are exaggerating by saying that Alusus has surpassed JavaScript and Python in terms of ease and clarity as well, at least in some areas.
Stay tuned for future articles where we will discuss more aspects that Alusus handles correctly and perhaps uniquely.