# seroperson's website

My personal blog with articles and thoughts on various software development topics: Scala, Nix, Jekyll, Bridgetown, the JVM, and so on.

## About

This is the personal website and blog of seroperson.

- Website: https://seroperson.me/
- Email: seroperson@gmail.com
- GitHub: https://github.com/seroperson
- LinkedIn: https://www.linkedin.com/in/seroperson
- Mastodon: https://mastodon.social/@seroperson
- X/Twitter: https://x.com/seroperson
- Bluesky: https://bsky.app/profile/seroperson.bsky.social
- Telegram: https://t.me/seroperson
- Telegram Channel: https://t.me/seroperson_me

## Content

This site contains technical articles and notes about software development, focusing on:

- JVM technologies (Scala, Java, Kotlin)
- Nix and NixOS
- Static site generators (Jekyll, Bridgetown)
- Web development
- DevOps and tooling
- Game modding and reverse engineering
- AI and LLM integration

## All Articles

# Article: [Live Reloading on JVM](/2025/11/28/jvm-live-reload/)

Here, I want to summarize everything I know about hot and live reloading on the JVM, and then show how I ended up implementing a universal live-reloading solution for any web application on the JVM. In short, in this article:

- We'll try to answer what types of reloading exist.
- Then we'll take an extended overview of existing reloading solutions on the JVM.
- Finally, we'll look into the details of implementing a universal, framework-agnostic live-reloading solution.

<!--more-->

<%= render "alert", type: "info", message: "**TL;DR**: See the **[♾️ seroperson/jvm-live-reload](https://github.com/seroperson/jvm-live-reload)** repository." %>

<%= toc %>

# Introduction

I've been missing runtime reloading every time I code in Scala. If we're talking about the typelevel / ZIO stacks, it doesn't exist there at all. The only framework that has it in the Scala ecosystem is [Play][37]; everything else is more related to Java or Kotlin.
But the situation isn't much better for Java or Kotlin if you're not using a major web framework but some small web library. Although some solutions exist, all of them have significant cons. Before diving in, though, let's define what exactly we're talking about.

As many of you probably know, reloading is a mechanism that swaps code at runtime and allows your application to run with new code without explicitly stopping and then starting the whole process. There are numerous ways to do it, so let's enumerate them. We won't cover such horrible things as reloading an application running in production, although some of the solutions I'll describe next are probably suitable for that too.

So, definitions. There are actually no strict definitions of the existing types of code reloading, but based on the information available on the internet, we can define the following types of runtime reloading:

- **Hot Module Replacement (HMR)** - replacing specific modules of an application at runtime. A specific module unloads and then loads into the running application with new code. That's what OSGi does.
- **Live Reloading** - patching the runtime with the whole new application code via a "partial" restart. In the JVM world, this means that during reloading we only re-create the application classloader, without touching the JVM itself or the system classes.
- **Hot Reloading** - patching only the changed pieces of code without any restart. This is the most advanced technique, and it has almost no suitable implementations on the JVM (we'll talk about it in detail later). It means that we only upload the changed bytecode directly into the runtime, and no classloader magic is necessary.

# Do we really need reloading in development mode?

It sounds crazy, but I have really seen people say things like "the incremental compiler is fast enough," "just split your code better," "if something takes too long to compile, your project is misconfigured," or "just start the build in continuous mode."
Although such people exist, there are also many people who really miss the reloading feature in modern web server solutions:

- [Support for autoload][3] (Nov 25, 2016)
- [How to "reload like Play"][4] (Jan 12, 2017)
- [Reloading like Play][2] (Apr 28, 2018)
- [Automatic reload for dev-server][31] (Nov 11, 2020)
- [Hot reload possible?][6] (2024)
- [Interactive / hot reload / live reload workflows][7] (Apr 25, 2025)
- [Play like dev mode][5] (Apr 30, 2025)
- (and probably more)

Besides people asking for it, many major web frameworks (like Spring, Play, Quarkus, and others) already support reloading out of the box, which once more proves that any sane reloading is better than nothing.

# Which reloading solutions are available on the JVM?

So which solutions do we have right now in the JVM world (and in Scala specifically)? Let's sum up everything I found during my research and discuss every approach. [You can skip the speech][30] and jump to the summary.

## sbt-revolver and "Triggered execution"

When it comes to hot-reloading discussions in Scala, [sbt-revolver][36] is the first thing everybody mentions. However, it isn't reloading at all. What this plugin does is watch your sources and restart the whole application process when they change. That is, it just stops the JVM and starts it again; no runtime patching is involved. The built-in `~` sbt feature (so-called [Triggered execution][1]) and other analogues belong here too, as they do exactly the same thing. It's replaceable with a simple shell script, and in practice it just wastes your CPU resources by continuously recompiling and restarting after every change. It's rarely useful, and it's definitely not what we're looking for.

## OSGi

[OSGi][35] is also mentioned quite often in hot-reloading discussions.
It allows you to use the HMR approach, but your application has to be written in a very specific way using the OSGi framework, which is not what we're looking for, especially if we want reloading only for development purposes.

## Java Instrumentation API

The [Java Instrumentation API][25] allows you to attach an "observer" to an application "for the purpose of gathering data to be utilized by tools". This API also contains methods for hot reloading, but they are limited to method bodies only, which makes them unsuitable for our purposes (quote from the [javadoc][29]):

> The redefinition may change method bodies, the constant pool, and attributes.
> The redefinition must not add, remove, or rename fields or methods, change the
> signatures of methods, or change inheritance.

There are more APIs with similar functionality ([JDI][10], [JVMTI][28]), but all of them share the same limitations. They actually look like the same API, and probably they really are. In short, these APIs provide true hot reloading, but since they're limited to patching method bodies, they aren't very usable in practice.

## Custom Virtual Machine

The [Dynamic Code Evolution Virtual Machine][12] (DCEVM) is a modification of the Java HotSpot VM that allows unlimited redefinition of loaded classes at runtime (in other words, it implements hot reloading). The initial project is dead, but it has been re-implemented as part of other projects, such as the [JetBrains Runtime][13]. There is also the [HotswapAgent][14] project, which lets you actually use it hassle-free (almost). This technology gets rid of the previously mentioned limitations and lets you swap basically everything, but it too has some drawbacks:

- The most noticeable one is that you have to run a specific JVM: [JBR][17] (Java 21, Java 17), [TravaJDK][16] (Java 11), [DCEVM][15] (Java 8).
- Configuration may be tricky, and you'll probably have to implement [custom plugins][18] to make it work with your stack.
It's worth noting another solution, the [Espresso VM][19], which has the same capabilities as DCEVM and shares the same drawback: it requires you to run a specific VM. However, [it also has some additional pros][20], which come from its tight integration with the Graal ecosystem. It sounds very promising and may be the best choice in some setups, but sticking to a specific JVM implementation isn't what we're looking for.

## JRebel

There is also a commercial solution, [JRebel][21], which doesn't require a specific VM and still provides true hot reloading. Information on how exactly it works is limited; [everything that's said][22] is that it uses the [Java Instrumentation API][38], but how exactly it bypasses the restrictions isn't clear. There are some assumptions floating around, but in any case this solution is paid, so it's not what we're looking for.

## Dynamic Proxy API

In short, [this API][26] allows you to intercept class interactions and wrap them with custom logic. Technically, we can watch for file changes, recompile, and implement a proxy that changes its destination every time a file changes and a newly loaded class becomes available. That's how the [scf37/hottie][27] library works. It sounds like a good idea, but it has a very limited number of applications, as it doesn't scale to the whole codebase and forbids complex changes. In a nutshell, it's the same "classloaders" approach.

## Using separate classloaders

This approach is the most common one in the JVM world, and it's what we actually call live reloading. The idea is that you have two classloaders:

- The unreloadable one, which contains third-party jars and everything else you can't reload.
- The reloadable one, which contains your frequently-changed code - usually all the code you have in your `/src` directory.

When a code change occurs, the application stops, the "reloadable" classloader is thrown away, and the application starts again with a new classloader created from the changed sources.
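To make the cycle concrete, here is a minimal sketch of it in plain Java. This is not code from any of the frameworks mentioned here - the class and method names are made up for illustration, and the commented-out `app.Main` entry point is a placeholder for your application's real main class.

```java
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Files;
import java.nio.file.Path;

// Minimal sketch of the two-classloader cycle. "app.Main" and the classes
// directory are placeholders; in a real setup the directory would contain
// your freshly compiled application classes.
public class ClassloaderReload {
    private static URLClassLoader appLoader; // the reloadable classloader

    static URLClassLoader reload(Path classesDir) throws Exception {
        if (appLoader != null) {
            appLoader.close(); // drop the old reloadable classloader
        }
        // The parent (here, the current classloader) plays the role of the
        // unreloadable one: it holds third-party jars and system classes.
        appLoader = new URLClassLoader(
                new URL[]{classesDir.toUri().toURL()},
                ClassloaderReload.class.getClassLoader());
        // In a real reloader you'd now look up the entry point and start it:
        // Class<?> main = Class.forName("app.Main", true, appLoader);
        // main.getMethod("main", String[].class).invoke(null, (Object) new String[0]);
        return appLoader;
    }

    public static void main(String[] args) throws Exception {
        Path dir = Files.createTempDirectory("classes");
        URLClassLoader first = reload(dir);
        URLClassLoader second = reload(dir); // simulate a code change
        // Each cycle produces a fresh classloader instance.
        System.out.println("fresh loader: " + (first != second));
    }
}
```

The key property is that the old classloader (and, once nothing references it, every class it defined) becomes garbage-collectable, which is exactly why dangling instances and unreleased resources matter so much in this approach.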
That's the way the [Play][8], [Spring Boot][9], [Quarkus][23], and [Apache Tapestry][24] reloaders work. It's a relatively simple yet working solution. It has some drawbacks, but usually you won't even notice them if you code using the frameworks mentioned above:

- The reloadable classloader can be big, so this approach may not be as fast as more advanced solutions.
- When you drop a classloader, the old instances of the changed code are still alive, and they won't be reloaded. If we're talking about Scala, this probably won't be a big problem, as everything is usually stateless, but there are still things like connection pools that won't go anywhere. If your application isn't completely stateless, memory leaks are possible, so you must take special care to stop and clean up existing resources to avoid leaks and undefined behavior.
- It can conflict with libraries that mess with classloaders.
- It's only available as part of a specific framework.

It could be our choice if we didn't have to stick to a specific web framework (spoiler: we don't have to anymore).

# BLUF

Well, if we drop the duplicates and the non-reloading solutions, all we're left with is:

- Hot reloading using the Java Instrumentation API, but it restricts you from changing the class schema.
- Hot reloading using a custom VM, but it ties you to a specific VM.
- Hot Module Replacement using OSGi, but it ties you to OSGi.
- Hot reloading using JRebel, but it's a commercial solution.
- Live reloading using several classloaders, but it's (yet) only available as part of specific web frameworks.

In short, there is no good solution 🤡

# Implementing a universal solution

That's when the idea of a universal, framework-agnostic reloader was born, and when the **[♾️ seroperson/jvm-live-reload][32]** project was started. To try it right now, you can jump straight to the [Installation][33] section in the repository.
![Preview](/images/2025-11-28-reloading/preview.gif)

This project implements a Play-like approach, which we know as the "classloader replacement" approach from the categorization above. For those who don't know Play, here's how it works in detail:

- The user sets up a plugin for the build system (Gradle, sbt, and others). The plugin implements all the reloading logic and provides the communication bridge between your application and the build system.
- The user starts the application in development mode.
- When a request arrives at the Play web server, it asks the build system whether there have been any new code changes.
- If there have been changes, the build system compiles them, and the plugin reloads the application classloaders.
- If not, everything proceeds as usual.

So, you code as much as you want, then make a curl request, and, while the request is waiting, everything compiles and reloads, and the response is served using the new code. No CPU waste, no JVM restarts, and everything is fast enough.

And nothing is stopping us from making this solution universal. All we need is:

- Re-implement the "application -> build system" communication logic for each build system.
- Implement a web proxy, which stands in front of our application and decides whether to reload it or not. This lets us leave the application code almost untouched, so you can just apply the plugin and nothing more.
- Also, we need to be able to start and stop the application programmatically and to release all its resources. There are no problems with starting, as it's basically a `static void main` call, but stopping and resource cleanup can be tricky to implement universally. We'll discuss this further.

Everything was implemented for the Gradle, sbt, and Mill build systems, and it even seems to be working. Let's see which difficulties I encountered and how they were solved.
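The per-request decision made by the proxy can be sketched roughly as follows. To be clear, none of these interface or method names come from the actual jvm-live-reload codebase - they are hypothetical stand-ins for the real build-system bridge and application handle.

```java
// Hypothetical sketch of the dev-mode proxy decision; all names are made up.
public class ReloadingProxyDemo {
    // bridge to the build system
    interface BuildSystem { boolean hasChanges(); void compile(); }
    // programmatic control over the application instance
    interface AppHandle { void stop(); void start(); }

    static void beforeForward(BuildSystem build, AppHandle app) {
        if (build.hasChanges()) {
            build.compile(); // the waiting request pays the compile cost
            app.stop();      // old instance releases its resources
            app.start();     // fresh instance on a fresh classloader
        }
        // ...then forward the request to the (possibly new) application
    }

    public static void main(String[] args) {
        StringBuilder log = new StringBuilder();
        BuildSystem build = new BuildSystem() {
            public boolean hasChanges() { return true; }
            public void compile() { log.append("compile "); }
        };
        AppHandle app = new AppHandle() {
            public void stop() { log.append("stop "); }
            public void start() { log.append("start"); }
        };
        beforeForward(build, app); // simulates one request after a code edit
        System.out.println(log);
    }
}
```

The point of putting this logic in a proxy rather than in the application is exactly what makes the solution framework-agnostic: the application only needs to be startable and stoppable from the outside.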
## Necessary tradeoffs

I wanted this to be a solution that doesn't require any additional changes to an application's code - just apply the plugin and go. Unfortunately, that didn't quite happen, but you still won't be forced to change much to make it work. Here is the list of changes necessary to make your application live-reloading ready:

- Implement a `/health` endpoint. It must respond successfully when your web application is ready to receive requests. This is necessary because without it, we can't know when the application has actually started and is running.
- Your application must be interruptible. Read the article **[⏹️ Making your JVM application interruptible][34]** for more details. In short, your `static void main` must handle `InterruptedException` by stopping your application and releasing all its resources. This is required to be able to programmatically stop your application and ensure all its resources are cleaned up.
- Your `static void main` method must only return when your application is completely stopped. When the method exits, we consider the application stopped, and we can start a new instance.

Major web frameworks with live-reloading features have full control over the application's lifecycle, but our universal solution doesn't have that information, which is why all these requirements need to be followed.

## Future plans

I hope this project will find its place and help users improve their development experience. It's in the alpha stage right now, so the very first milestone is stabilization; then, based on feedback, we'll form a roadmap with the necessary missing features, which will be implemented on the way to version `1.0`. Honestly, I have no idea which important features are missing right now, so I'm all ears.

In the future, I also want to look into researching a true JRebel-like hot-reloading solution, which seems pretty challenging but still possible (in the end, the JRebel folks did it).
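For illustration, an entry point satisfying the interruptibility requirements above might look roughly like this. The `Server` class here is a stand-in for whatever web server you actually use, not a real library API; a real main would block on the server instead of sleeping.

```java
public class InterruptibleMain {
    // Stand-in for your real web server; the name and methods are placeholders.
    static class Server {
        void start() { System.out.println("started"); }
        void stop()  { System.out.println("stopped"); }
    }

    public static void main(String[] args) throws Exception {
        // The "application" thread models your static void main.
        Thread appMain = new Thread(() -> {
            Server server = new Server();
            server.start();
            try {
                // Block until someone interrupts this thread (e.g. the reloader).
                Thread.sleep(Long.MAX_VALUE);
            } catch (InterruptedException e) {
                // The reloader asked us to stop: release everything we hold.
                server.stop(); // close pools, sockets, executors, ...
            }
            // The method returns only once the application is fully stopped.
        });
        appMain.start();
        Thread.sleep(200);  // let the "application" come up
        appMain.interrupt(); // this is effectively what the reloader does
        appMain.join();      // wait until it is fully stopped
    }
}
```

The important contract is the last line of the application thread: returning from `main` is the signal that every resource has been released and a new instance may be started.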
<!-- prettier-ignore-start -->
[1]: https://www.scala-sbt.org/1.x/docs/Triggered-Execution.html
[2]: https://github.com/http4s/http4s/issues/1817
[3]: https://github.com/http4s/http4s/issues/766
[4]: https://github.com/http4s/http4s/issues/849
[5]: https://github.com/zio/zio-http/issues/3474
[6]: https://www.reddit.com/r/scala/comments/1extryi/hot_reload_possible/
[7]: https://users.scala-lang.org/t/interactive-hot-reload-live-reload-workflows/10726
[8]: https://jto.github.io/articles/play_anatomy_part2_sbt/
[9]: https://docs.spring.io/spring-boot/reference/using/devtools.html
[10]: https://docs.oracle.com/javase/8/docs/technotes/guides/jpda/enhancements1.4.html
[11]: https://github.com/scalacenter/scala-debug-adapter
[12]: https://ssw.jku.at/dcevm/
[13]: https://github.com/JetBrains/JetBrainsRuntime
[14]: https://github.com/HotswapProjects/HotswapAgent
[15]: https://github.com/dcevm/dcevm
[16]: https://github.com/TravaOpenJDK/trava-jdk-11-dcevm
[17]: https://github.com/JetBrains/JetBrainsRuntime
[18]: https://hotswapagent.org/mydoc_custom_plugins.html
[19]: https://www.graalvm.org/latest/reference-manual/espresso/hotswap/
[20]: https://www.graalvm.org/latest/reference-manual/espresso/
[21]: https://www.jrebel.com/
[22]: https://www.jrebel.com/jrebel/learn/faq
[23]: https://quarkus.io/guides/class-loading-reference
[24]: https://tapestry.apache.org/class-reloading.html
[25]: https://docs.oracle.com/javase/8/docs/api/java/lang/instrument/Instrumentation.html
[26]: https://docs.oracle.com/javase/8/docs/technotes/guides/reflection/proxy.html
[27]: https://github.com/scf37/hottie
[28]: https://docs.oracle.com/en/java/javase/21/docs/specs/jvmti.html#bci
[29]: https://docs.oracle.com/javase/8/docs/api/java/lang/instrument/Instrumentation.html#redefineClasses-java.lang.instrument.ClassDefinition...-
[30]: #bluf
[31]: https://github.com/javalin/javalin/issues/1109
[32]: https://github.com/seroperson/jvm-live-reload
[33]: https://github.com/seroperson/jvm-live-reload?tab=readme-ov-file#installation
[34]: /2025/10/20/interrupting-jvm-application
[35]: https://en.wikipedia.org/wiki/OSGi
[36]: https://github.com/spray/sbt-revolver
[37]: https://github.com/playframework/playframework
[38]: #java-instrumentation-api
<!-- prettier-ignore-end -->

---

# Article: [Live Reloading on the JVM](/ru/2025/11/28/jvm-live-reload/)

In this article, I'd like to sum up everything we know about hot/live reloading on the JVM, and then show how I arrived at a universal live-reloading solution for any web application on the JVM. In short, in this article we will:

- Try to formulate which kinds of reloading exist.
- Take a detailed look at the implementations that exist on the JVM.
- And discuss some details of the universal solution's implementation, and what led to it in the first place.

<!--more-->

<%= render "alert", type: "info", message: "**TL;DR**: The **[♾️ seroperson/jvm-live-reload](https://github.com/seroperson/jvm-live-reload)** repository." %>

<%= toc %>

# Introduction

For as long as I've been developing web services in Scala, I've been missing this live-reloading feature. If we're talking about the typelevel / ZIO stacks, it simply doesn't exist there. In fact, Scala has no reloading anywhere at all, except in the [Play framework][40]. Everything else that has it belongs, in one form or another, to the Java or Kotlin ecosystems. But the situation isn't any better in Java and Kotlin if you're not using one of the big web frameworks but sitting on a small library like [ktor][37], [javalin][38], [http4k][39], and so on. If you really want it, of course, there are ways. But, spoiler: none of them looks great. Before we look at those ways, let's define what exactly we're talking about here. Many of you can probably guess that hot/live reloading is a mechanism for replacing code at runtime without explicitly restarting the application.
It's usually very fast, and almost always much faster than stopping and starting everything by hand. It can be implemented in different ways, so let's list the main variations a given reloader can fall into. And, by the way, we're talking specifically about reloading during development here; we won't touch on things like reloading in production, although some of the methods discussed could be applied there too.

So, definitions. I haven't actually found any strict definitions, but based on the information out there, we can distinguish the following types of runtime reloading:

- **Hot Module Replacement (HMR)** - reloading only a certain part of an application at runtime. It literally means that the application is strictly divided into modules that can be dynamically unloaded and loaded. This approach is possible with, for example, OSGi frameworks.
- **Live Reloading** - patching the runtime with new code via a partial restart. It usually works so that the application effectively restarts, but not entirely - only the user code. The loaded libraries, system internals, and so on remain untouched.
- **Hot Reloading** - the most advanced mechanic: patching only the actually changed user code, with no restart at all. That is, the running code is simply patched in the runtime, and on the next iteration it just works according to the new version. On the JVM there is essentially no decent implementation of this type of reloading (we'll talk about this later).

# Do we even need reloading during development?

It may sound a bit cringeworthy, but in discussions about the lack of reloading in this or that framework, you can often find phrases like "compilers are fast enough these days", "just split your code better", "if something takes long to build, the problem is in your project", "just start the build in continuous mode", and others.
At the same time, you can see that the demand for this feature is just enormous:

- [Support for autoload][3] (Nov 25, 2016)
- [How to "reload like Play"][4] (Jan 12, 2017)
- [Reloading like Play][2] (Apr 28, 2018)
- [Automatic reload for dev-server][31] (Nov 11, 2020)
- [Hot reload possible?][6] (2024)
- [Interactive / hot reload / live reload workflows][7] (Apr 25, 2025)
- [Play like dev mode][5] (Apr 30, 2025)
- (and you can find even more examples)

Besides people needing it, many major web frameworks (like Spring, Play, Quarkus, and others) have long supported this feature out of the box, and all of this once again proves that reloading is needed and that having it is clearly better than not having it.

# Reloading implementations on the JVM

So which implementations already exist on the JVM (and in Scala in particular)? Let's make a brief overview of what I found during my research. [You can skip the boring part][30] and jump straight to the summary.

## sbt-revolver and "Triggered execution"

Whenever reloading in Scala comes up, [sbt-revolver][36] is the first thing people mention as a "solution". But in fact it's not reloading at all. What this plugin does is simply restart the whole process as soon as any of its sources change on disk. That is, it literally stops the JVM and starts it again from scratch. There's no runtime patching here. The built-in `~` feature in sbt (so-called [Triggered execution][1]) belongs in the same bucket, doing essentially the same thing, as do the various "continuous modes" and their analogues in other build systems. All of this can be replaced with a single shell script, and in practice it just burns CPU, constantly rebooting your application while you code. It's rarely useful and clearly not what we're looking for.

## OSGi

[OSGi][35] is also often mentioned as a "solution" to reloading problems. As I wrote earlier, it enables the HMR approach, but the problem is that for this, the application has to be written using the OSGi framework.
Given that this is not just a matter of "adding a small library", and that we only need it for reloading during development, this option doesn't suit us.

## Java Instrumentation API

The [Java Instrumentation API][25] lets you attach an "observer" to an application "for the purpose of gathering data to be utilized by tools". This API also contains methods for hot reloading, but the functionality is limited to changing method bodies only. To be more precise, you can't change the schema of existing classes (quote from the [javadoc][29]):

> The redefinition may change method bodies, the constant pool, and attributes.
> The redefinition must not add, remove, or rename fields or methods, change the
> signatures of methods, or change inheritance.

There are even more JVM APIs with similar functionality ([JDI][10], [JVMTI][28]), but all of them have the same limitations. Perhaps it's all just the same API under the hood. All of this makes true hot reloading possible, but with such limitations it's hard to find a practical use for it outside of debuggers (where it is indeed successfully used).

## Custom VM

The [Dynamic Code Evolution Virtual Machine][12] (DCEVM) is a modification of the Java HotSpot VM that allows redefining loaded classes at runtime (in other words, it implements hot reloading). The original project is dead, but it has been re-implemented within other VMs, for example, the [JetBrains Runtime][13]. There is also a wrapper that lets you actually apply this hot reloading - the [HotswapAgent][14] project. This technology implements real hot reloading and lets you reload any changes, but it's not without drawbacks either:

- The most obvious one is that you have to use a custom JVM: [JBR][17] (Java 21, Java 17), [TravaJDK][16] (Java 11), [DCEVM][15] (Java 8).
- Configuring all of this correctly may not be that easy, and you might have to implement [custom plugins][18] to get it working on your stack.
It's also worth noting that in the world of custom VMs with hot reloading, there is one more solution, the [Espresso VM][19], which has all the same pros and cons but, on top of that, offers [tighter integration with the Graal ecosystem][20]. All of this looks really nice, but forcing users to run custom VMs sounds a bit off, so we keep looking.

## JRebel

There is also, surprisingly, a paid solution, [JRebel][21], which doesn't require running a custom VM yet still implements true hot reloading. There is almost no information on how it works; [all that's known][22] is that it uses the [Java Instrumentation API][41], but how exactly it patches around the restrictions is unclear. In any case, it's paid, and we don't like paid, so we keep looking.

## Dynamic Proxy API

In short, [this API][26] allows you to intercept interactions with a class and wrap them in some custom logic. In theory, we could poll for file changes, compile them when needed, and, with a proxy in place, route calls either to the old implementation or to the new one loaded in a new classloader. That's roughly how the little-known [scf37/hottie][27] library works. It sounds nice, but it seems hard to extend this to all user-code classes without a performance hit. And, again, it's unclear how to deal with complex changes. So, in short, clearly not our option. Looking at it globally, it's essentially the same classloader approach.

## The classloader approach

And finally, the most popular solution in the JVM world and the one we call Live Reloading. The idea is that there are two classloaders:

- A permanent one, holding dependencies and system classes, which never reloads.
- A reloadable one, holding the user code, which does reload.

When the code is edited, the application stops, the reloadable classloader is dropped, and a new one is created from the new code, on which the application starts again.
That's how [Play][8], [Spring Boot][9], [Quarkus][23], and [Apache Tapestry][24] work. It's a relatively simple but the most workable solution. It still has downsides, but most likely you won't even notice them:

- The reloadable classloader can be quite big, so a reload may not be super fast (but it will still be many times faster than no reloading at all).
- When the reloadable classloader is dropped, it's very important to correctly stop and clean up all the resources in use. If something is missed, memory will leak, and after a couple of reloads everything will crash with an out-of-memory error (or exhibit some other undefined behavior).
- This method requires special attention to libraries that use classloader magic.
- This method is usually only available as part of some web framework.

So, of course, this could be the solution for us, if only we didn't have to build the application on specific web frameworks for it (spoiler: not anymore).

# Summary

So, if we exclude the duplicates, we're left with the following:

- Hot reloading using the Java Instrumentation API, but this method doesn't allow changing class schemas.
- Hot reloading using a custom VM, but forcing users to run a custom VM just for reloading sounds a bit off.
- HMR using OSGi, but building an application on OSGi just for reloading during development also sounds a bit off.
- Hot reloading using JRebel, but it costs roughly half a salary.
- Live reloading using several classloaders, but this solution is only available as part of specific web frameworks.

In short, there are no decent solutions 🤡

# Implementing a universal solution

Somewhere around this point, I got the idea of implementing a framework-independent reloader, and somewhere around here the **[♾️ seroperson/jvm-live-reload][32]** project was born. To try it right now, you can jump straight to the [Installation][33] section in the repository.
![Preview](/images/2025-11-28-reloading/preview.gif)

This project implements a Play-like approach, which we also partially know as the classloader approach described above. For those who aren't familiar with Play, here's how it works in detail:

- The user installs a plugin for their build system (Gradle, sbt, and others). The plugin implements all the reloading logic and acts as a communication bridge between the application and the build system.
- The user starts the application in dev mode.
- When a request arrives at Play, the framework asks the build system whether there have been any code changes since the last reload (or since the application started).
- If there were changes, they get compiled, and the plugin reloads the application with a new classloader.
- If there were no changes, the request proceeds as usual.

So you code as much as you need, then make a `curl` request, everything compiles and reloads (the request just hangs waiting all this time), and the response to that request is already produced by the new code. No wasted restarts, no full-process restarts, and it's usually very fast.

And, as you've guessed, nothing prevents us from making this solution universal. All that's needed is:

- Re-implement the "application -> build system" communication logic for each build system.
- Implement a proxy that stands in front of the application and decides whether to reload the code or not. This lets us leave the user code practically untouched when using the plugin, so in the general case you can just add it to the build and everything will work.
- We also need to be able to start and stop the application programmatically, plus at least some guarantees that all the resources in use have been released. Starting the application is not a problem - it's solved by simply calling `static void main` via reflection. Everything else is a bit more problematic. I'll describe this a bit further on.

All of this has been implemented for the Gradle, sbt, and Mill build systems, and it even works.
Of course, it's surely still quite raw, but I think we'll debug and fix everything. Let me tell you which difficulties I ran into and how I solved them.

## Necessary tradeoffs

I wanted to build all of this so that the plugins wouldn't require changing anything at all - just apply the plugin and have it work out of the box. But the world isn't perfect, so in some cases you'll have to tweak a few things. In any case, there shouldn't be many changes. So, for an application to be "live reloading ready", it must meet the following requirements:

- The web application must have a `/health` route. It must respond with a 2xx code when the application is ready to serve requests. This is necessary because without it, the plugin doesn't know when the application has actually started and is running.
- The web application must be interruptible. You can read my article **[⏹️ Making your JVM application interruptible][34]** for more details. In short, your `static void main` method must handle `InterruptedException` by stopping the application and releasing all resources. This is needed so that the plugin can stop the application when necessary.
- The `static void main` method must be blocking and return only when the application is completely stopped and all resources are released. Once we've exited the method, the plugin considers the application stopped, and a new instance can be started.

All the web frameworks that implement reloading with this approach have full control over the application's lifecycle and over everything in general, but our universal solution can't do that, hence these requirements.

## Future plans

I hope this project will find its place and help users in their development. As I said, it's kind of an alpha right now, so the very first goal for me is stabilization, and then, based on feedback, we'll form a roadmap of features leading up to version `1.0`.
Honestly, I have no idea yet what exactly is missing right now, so I'm all ears.

# Conclusion

And subscribe to my <a href="<%= site.metadata.author.tg_channel_link %>">🛫 Telegram channel</a>, by the way. There I post various thoughts on everyday development that don't fit the blog format.

<!-- prettier-ignore-start -->
[1]: https://www.scala-sbt.org/1.x/docs/Triggered-Execution.html
[2]: https://github.com/http4s/http4s/issues/1817
[3]: https://github.com/http4s/http4s/issues/766
[4]: https://github.com/http4s/http4s/issues/849
[5]: https://github.com/zio/zio-http/issues/3474
[6]: https://www.reddit.com/r/scala/comments/1extryi/hot_reload_possible/
[7]: https://users.scala-lang.org/t/interactive-hot-reload-live-reload-workflows/10726
[8]: https://jto.github.io/articles/play_anatomy_part2_sbt/
[9]: https://docs.spring.io/spring-boot/reference/using/devtools.html
[10]: https://docs.oracle.com/javase/8/docs/technotes/guides/jpda/enhancements1.4.html
[11]: https://github.com/scalacenter/scala-debug-adapter
[12]: https://ssw.jku.at/dcevm/
[13]: https://github.com/JetBrains/JetBrainsRuntime
[14]: https://github.com/HotswapProjects/HotswapAgent
[15]: https://github.com/dcevm/dcevm
[16]: https://github.com/TravaOpenJDK/trava-jdk-11-dcevm
[17]: https://github.com/JetBrains/JetBrainsRuntime
[18]: https://hotswapagent.org/mydoc_custom_plugins.html
[19]: https://www.graalvm.org/latest/reference-manual/espresso/hotswap/
[20]: https://www.graalvm.org/latest/reference-manual/espresso/
[21]: https://www.jrebel.com/
[22]: https://www.jrebel.com/jrebel/learn/faq
[23]: https://quarkus.io/guides/class-loading-reference
[24]: https://tapestry.apache.org/class-reloading.html
[25]: https://docs.oracle.com/javase/8/docs/api/java/lang/instrument/Instrumentation.html
[26]: https://docs.oracle.com/javase/8/docs/technotes/guides/reflection/proxy.html
[27]: https://github.com/scf37/hottie
[28]: https://docs.oracle.com/en/java/javase/21/docs/specs/jvmti.html#bci
[29]:
https://docs.oracle.com/javase/8/docs/api/java/lang/instrument/Instrumentation.html#redefineClasses-java.lang.instrument.ClassDefinition...-
[30]: #итого
[31]: https://github.com/javalin/javalin/issues/1109
[32]: https://github.com/seroperson/jvm-live-reload
[33]: https://github.com/seroperson/jvm-live-reload?tab=readme-ov-file#installation
[34]: /2025/10/20/interrupting-jvm-application
[35]: https://en.wikipedia.org/wiki/OSGi
[36]: https://github.com/spray/sbt-revolver
[37]: https://github.com/ktorio/ktor
[38]: https://github.com/javalin/javalin
[39]: https://github.com/http4k/http4k
[40]: https://github.com/playframework/playframework
[41]: #java-instrumentation-api
<!-- prettier-ignore-end -->

---

# Article: [Making your JVM application interruptible](/2025/10/20/interrupting-jvm-application/)

Today we'll discuss how to make your JVM application interruptible, what exactly it means, why you would need it, and how it can improve your development experience.

<!--more-->

<%= toc %>

# How we run JVM applications

Before we start, let's set up some context by looking at how we can run our applications. In general, there are at least two ways to run a JVM application during development. Let's compare them.

## Standalone Java process

This is the case when an application starts in its own Java process. This is what happens when we set `fork := true` in sbt, how Gradle runs your application, and how it runs in production. When the `fork` flag is set, running `sbt run` starts your application in a completely separate process, which is not related to sbt in any way. A notable downside of this approach is that it makes everything a little bit slower because the JVM starts from scratch, loads all the classes, nothing is JIT-compiled yet, and so on. But the advantage is that it allows you to [handle OS signals][1], like `SIGTERM`, `SIGINT`, `SIGUSR2`, and others.
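As a quick illustration of how code typically reacts to such signals in a forked process, here is a minimal sketch of registering a JVM shutdown hook; the cleanup action itself is a placeholder for whatever resources a real application would release.

```java
class ShutdownHookExample {

    // Registers a hook that runs the given cleanup action when the JVM
    // exits (e.g. on SIGTERM or CTRL + C in a forked run) and returns the
    // hook thread so it can be deregistered if needed.
    static Thread registerCleanup(Runnable cleanup) {
        Thread hook = new Thread(cleanup, "cleanup-hook");
        Runtime.getRuntime().addShutdownHook(hook);
        return hook;
    }
}
```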
Catching these signals is what frameworks do for you, and they rely on them to initiate shutdown logic, which is usually implemented using [Runtime.addShutdownHook(Thread)][2]. In a common scenario, frameworks set shutdown hooks that are triggered when the JVM process stops, for example, when a container stops in production or when you press `CTRL + C` to cancel a forked run. As an additional advantage, running an application in a separate process better shows how it would run in production.

## Run within existing JVM process

As opposed to running an application in a separate process, during development you can also run it in an already existing JVM. This is the default behavior for `sbt run`, and it makes everything a little bit faster because you don't need to start another JVM, system classes are already loaded, many optimizations are already applied, and so on.

### Problems of running uninterruptible applications within existing JVM

Firstly, as you might have guessed, you can no longer catch OS signals, and shutdown hooks won't be called either. When you press `CTRL + C`, sbt (or any other tool which wraps your application) throws `java.lang.InterruptedException` in your application's main thread and expects it to stop. If your application or framework doesn't handle it correctly, you may encounter memory leaks, unreleased ports, strange behavior in subsequent runs, or perhaps your `CTRL + C` will be completely ignored, among other unpleasant issues.

It's not a rare case. There is even an open issue in sbt where people advocate for [making "fork := true" the default][3] because their application doesn't work well in a non-forked JVM. This mostly happens because their framework (or perhaps the application itself) incorrectly handles `InterruptedException`, which is clearly a bug. You can see quotes such as:

> **[@jrudolph][4]** said: I have to say, that adding `fork in run := true` to
> an sbt project is one of the first things I do in many cases.
> **[@AugustNagro][5]** said: I just spent 30 minutes trying to figure out why a
> Vertx endpoint was not being updated.

> **[@altrack][6]** said: From my team's experience we had more issue than gains
> from having `fork := false`, and eventually converted all dozens repos to
> `fork := true`. Usually we had some strange startup/shutdown errors/issues
> that were resolved by adding `fork =: true`.

Such issues are possible not only with the sbt runner but also with any other tool that "wraps" your application and restarts it within a single JVM process.

# What exactly does "make an application interruptible" mean?

There is no strict definition, I guess, so I'll define it loosely. It means the following:

- Make your application handle `java.lang.InterruptedException` by initiating shutdown and releasing all resources.
- Make your `static void main` method run for as long as the application is alive.

Nowadays, it's rare for people to make their applications and frameworks interruptible, probably because it doesn't affect production and usually doesn't heavily affect the development process.

# Why do I need it then and why not run in a forked JVM?

You need it if your application is managed by some tool (call it a runner) and runs in the existing tool's JVM. It could be your build system, IDE, or some other tool that is responsible for running the application for some reason. For example, it's necessary if you're using **[♾️ Live Reloading on JVM][7]**. If that's your case, then you need your application to be interruptible. If not, feel free to run the application in a forked JVM.

# How to make an application interruptible?
Well, there is no universal solution, and every case must be investigated individually, but a very basic example is:

```java
// Before
public class Application {
  public static void main(String[] args) {
    new MyWebServer("localhost", 8080).start();
  }
}

// After
public class Application {
  public static void main(String[] args) {
    var server = new MyWebServer("localhost", 8080);
    try {
      server.start();
      Thread.currentThread().join();
    } catch (InterruptedException ex) {
      server.close();
    }
  }
}
```

The general idea is to **shut down everything** if you catch `InterruptedException` - that's it. And don't forget to ensure that the `main` method is blocking and runs for as long as the application is alive.

<!-- prettier-ignore-start -->
[1]: https://docs.oracle.com/en/java/javase/21/troubleshoot/handle-signals-and-exceptions.html#GUID-57C048F6-0D4B-43BD-B27C-06A613435360
[2]: https://docs.oracle.com/en/java/javase/21/docs/api/java.base/java/lang/Runtime.html#addShutdownHook(java.lang.Thread)
[3]: https://github.com/sbt/sbt/issues/6413
[4]: https://github.com/jrudolph
[5]: https://github.com/AugustNagro
[6]: https://github.com/altrack
[7]: /2025/11/28/jvm-live-reload
<!-- prettier-ignore-end -->

---

# Article: [Implementing a JWT-based authorization for zio-http](/2025/09/03/zio-http-jwt-auth/)

Recently I've published my `pac4j` wrapper for `zio-http`, [zio-http-pac4j][1]. For those who aren't familiar with the underlying technology, [pac4j][2] is a Java security framework which allows you to easily implement authorization and authentication mechanisms. Today I want to show how to implement JWT-based authorization using this library.

<!--more-->

<%= toc %>

# Introduction

Firstly, let's quickly recall what JWT is. As [RFC-7519][3] says:

> **JSON Web Token (JWT)** is a compact, URL-safe means of representing claims
> to be transferred between two parties. The claims in a JWT are encoded as a
> JSON object that is used as the payload of a JSON Web Signature (JWS)
> structure or as the plaintext of a JSON Web Encryption (JWE) structure,
> enabling the claims to be digitally signed or integrity protected with a
> Message Authentication Code (MAC) and/or encrypted.

A basic JWT token looks like:

```
eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiYWRtaW4iOnRydWUsImlhdCI6MTUxNjIzOTAyMn0.KMUFsIDTnFmyG3nMiGM6H9FNFUROf3wh7SmqJp-QV30
```

Here you can see three parts split by dots. Each part is a base64url-encoded string; respectively, they are the header, the payload and the signature. To dig deeper into what exactly they are and what they can contain, it's better to refer to the RFC itself or look at some [good JWT introduction article][4]. Now let's jump right into the guide.

# Implementing Basic Auth

Now, imagine you have your `zio-http` API, which you want to protect using JWT tokens. There are numerous ways to configure everything, but we'll discuss the most common case: using basic auth to log in, and then using a JWT token to access our protected API. Start by adding the necessary dependencies:

```scala
val pac4jVersion = "6.2.1"

libraryDependencies ++= Seq(
  // pac4j + zio-http integration
  "me.seroperson" %% "zio-http-pac4j" % "0.1.1",
  // JWT-related pac4j classes
  "org.pac4j" % "pac4j-http" % pac4jVersion
)
```

`zio-http-pac4j` requires you to define the security configuration via a special class, `SecurityConfig`.
Let's fill it with Basic Auth settings:

```scala
import me.seroperson.zio.http.pac4j.config.SecurityConfig
import org.pac4j.http.client.direct.DirectBasicAuthClient
import org.pac4j.http.credentials.authenticator.test.SimpleTestUsernamePasswordAuthenticator

val securityConfig = SecurityConfig(
  clients = List({
    val directBasicAuthClient = new DirectBasicAuthClient(
      new SimpleTestUsernamePasswordAuthenticator()
    )
    directBasicAuthClient
  })
)
```

We used `SimpleTestUsernamePasswordAuthenticator` here, which accepts any user whose `username == password`, so it should only be used for testing purposes. Let's now define our login endpoint, protected using Basic Auth.

```scala
// ...
import zio.ZIOAppDefault
import zio.ZLayer
import zio.http._

import me.seroperson.zio.http.pac4j.Pac4jMiddleware
import me.seroperson.zio.http.pac4j.ZioPac4jDefaults
import me.seroperson.zio.http.pac4j.config.SecurityConfig
import me.seroperson.zio.http.pac4j.session.InMemorySessionRepository

object ZioApi extends ZIOAppDefault {

  val userRoutes = Routes(
    Method.GET / "jwt" -> handler {
      Response.ok
    } @@ Pac4jMiddleware.securityFilterUnit(clients = Some(List("DirectBasicAuthClient")))
  )

  override val run = for {
    _ <- Server
      .serve(userRoutes)
      .provide(
        Server.defaultWithPort(9000),
        ZioPac4jDefaults.live,
        InMemorySessionRepository.live,
        ZLayer.succeed {
          SecurityConfig(/* ... */)
        }
      )
  } yield ()
}
```

Here we use `ZioPac4jDefaults.live` to provide the necessary `pac4j` classes and `InMemorySessionRepository.live` to provide storage for your sessions (be sure to implement some more reliable storage for production purposes). Let's start our server using `sbt run` and make a `curl` request to ensure everything works.

```text
$ > curl -v -u admin:admin "http://localhost:9000/jwt"
* Host localhost:9000 was resolved.
* IPv6: ::1
* IPv4: 127.0.0.1
* Trying [::1]:9000...
* Connected to localhost (::1) port 9000
* Server auth using Basic with user 'admin'
> GET /jwt HTTP/1.1
> Host: localhost:9000
> Authorization: Basic YWRtaW46YWRtaW4=
> User-Agent: curl/8.8.0
> Accept: */*
>
* Request completely sent off
< HTTP/1.1 200 Ok
< date: Tue, 02 Sep 2025 16:30:42 GMT
< content-length: 0
<
* Connection #0 to host localhost left intact
```

# Implementing JWT-based authorization

Now let's add more configuration to implement JWT-based authorization. We'll leverage the power of `ZLayer` to reuse it later in the endpoint's body:

```scala
// ...
import org.pac4j.http.client.direct.HeaderClient
import org.pac4j.jwt.config.signature.SecretSignatureConfiguration
import org.pac4j.jwt.config.signature.SignatureConfiguration
import org.pac4j.jwt.credentials.authenticator.JwtAuthenticator

object ZioApi extends ZIOAppDefault {

  val userRoutes = Routes( /* ... */ )

  override val run = for {
    _ <- Server
      .serve(userRoutes)
      .provide(
        // ...
        ZLayer.succeed[SignatureConfiguration] {
          new SecretSignatureConfiguration(
            // 256-bits
            "VOAAvi(F2Wi9LiybnxNOJGSryxX58@;v@5Ciz5Cv~WQ|8_yh]ZAIhqDAYhZ3}r{"
          )
        },
        ZLayer.fromZIO {
          for {
            signatureConfig <- ZIO.service[SignatureConfiguration]
          } yield SecurityConfig(clients =
            List(
              {
                val jwtAuthenticator = new JwtAuthenticator()
                jwtAuthenticator.setSignatureConfiguration(signatureConfig)
                val headerClient = new HeaderClient(
                  Header.Authorization.name,
                  jwtAuthenticator
                )
                headerClient
              },
              /* ... */
            )
          )
        }
      )
  } yield ()
}
```

Here we've defined everything we need to protect our endpoint using a JWT token, which we'll pass using the `Authorization` header. Our token will be signed using the given secret, so nobody will be able to forge it without knowing that secret.
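To see why the secret is what prevents forgery: with HS256, the signature is simply an HMAC-SHA256 computed over `base64url(header) + "." + base64url(payload)` keyed with the shared secret. A minimal JDK-only sketch of the idea (pac4j does all of this for you, so this is purely illustrative):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

class Hs256 {

    // Computes the HS256 signature part of a JWT: HMAC-SHA256 over the
    // "base64url(header).base64url(payload)" string, keyed with the secret,
    // then base64url-encoded without padding (as JWT requires).
    static String sign(String headerAndPayload, String secret) {
        try {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(secret.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
            byte[] sig = mac.doFinal(headerAndPayload.getBytes(StandardCharsets.UTF_8));
            return Base64.getUrlEncoder().withoutPadding().encodeToString(sig);
        } catch (java.security.GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }
}
```

Changing either the signed content or the secret changes the signature, which is exactly what the server checks when validating a token.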
Now let's implement JWT-token generation in our `/jwt` endpoint:

```scala
import org.pac4j.jwt.profile.JwtGenerator
import java.util.Date

val userRoutes = Routes(
  Method.GET / "jwt" -> handler {
    for {
      profile <- ZIO.service[UserProfile]
      signatureConfig <- ZIO.service[SignatureConfiguration]
      jwtGenerator = {
        val jwtGenerator = new JwtGenerator(signatureConfig)
        jwtGenerator
      }
      token = jwtGenerator.generate(profile)
    } yield Response.ok.copy(body = Body.fromCharSequence(token))
  } @@[SignatureConfiguration] Pac4jMiddleware
    .securityFilter(clients = Some(List("DirectBasicAuthClient")))
)
```

And, finally, let's implement some JWT-protected endpoint:

```scala
val userRoutes = Routes(
  /* ... */
  Method.GET / "protected" -> handler {
    for {
      profile <- ZIO.service[UserProfile]
    } yield Response.ok
      .copy(body = Body.fromCharSequence(profile.getUsername))
  } @@ Pac4jMiddleware
    .securityFilter(clients = Some(List("HeaderClient")))
)
```

Checking again that everything works:

```text
$ > curl -v -u admin:admin "http://localhost:9000/jwt"
* Host localhost:9000 was resolved.
* IPv6: ::1
* IPv4: 127.0.0.1
* Trying [::1]:9000...
* Connected to localhost (::1) port 9000
* Server auth using Basic with user 'admin'
> GET /jwt HTTP/1.1
> Host: localhost:9000
> Authorization: Basic YWRtaW46YWRtaW4=
> User-Agent: curl/8.8.0
> Accept: */*
>
* Request completely sent off
< HTTP/1.1 200 Ok
< date: Tue, 02 Sep 2025 19:27:21 GMT
< content-length: 204
<
* Connection #0 to host localhost left intact
eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJvcmcucGFjNGouY29yZS5wcm9maWxlLkNvbW1vblByb2ZpbGUjYWRtaW4iLCIkaW50X3JvbGVzIjpbXSwiaWF0IjoxNzU2ODQxMjQxLCJ1c2VybmFtZSI6ImFkbWluIn0.irTIu88ah28i6gARa0iJwhk7SxpfMz481no2AT9di4A
```

Let's decode it using some JWT decoder just for curiosity.
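By the way, an online decoder isn't strictly required - the header and payload are plain base64url-encoded JSON, so you can peek inside with a few lines of JDK-only code (a throwaway sketch, not part of the application):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

class JwtInspect {

    // Returns the decoded JSON of the given JWT part (0 = header, 1 = payload).
    // JWT uses the unpadded base64url alphabet, which the JDK decoder accepts.
    static String decodePart(String jwt, int index) {
        String part = jwt.split("\\.")[index];
        return new String(Base64.getUrlDecoder().decode(part), StandardCharsets.UTF_8);
    }
}
```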
The header and payload will look like:

```json
// Header
{
  "alg": "HS256"
}

// Payload
{
  "sub": "org.pac4j.core.profile.CommonProfile#admin",
  "$int_roles": [],
  "iat": 1756841241,
  "username": "admin"
}
```

We can notice the following fields:

- `sub` is actually optional, but filled with some internal `pac4j` value.
- `iat` shows when this token was issued.
- `$int_roles` can contain the user's roles. We'll come back to it later.
- `username` contains the username (surprising).

Let's try to access our protected endpoint:

```
$ > curl -v -H "Authorization: eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJvcmcucGFjNGouY29yZS5wcm9maWxlLkNvbW1vblByb2ZpbGUjYWRtaW4iLCIkaW50X3JvbGVzIjpbXSwiaWF0IjoxNzU2ODQxMjQxLCJ1c2VybmFtZSI6ImFkbWluIn0.irTIu88ah28i6gARa0iJwhk7SxpfMz481no2AT9di4A" "http://localhost:9000/protected"
* Host localhost:9000 was resolved.
* IPv6: ::1
* IPv4: 127.0.0.1
* Trying [::1]:9000...
* Connected to localhost (::1) port 9000
> GET /protected HTTP/1.1
> Host: localhost:9000
> User-Agent: curl/8.8.0
> Accept: */*
> Authorization: eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJvcmcucGFjNGouY29yZS5wcm9maWxlLkNvbW1vblByb2ZpbGUjYWRtaW4iLCIkaW50X3JvbGVzIjpbXSwiaWF0IjoxNzU2ODQxMjQxLCJ1c2VybmFtZSI6ImFkbWluIn0.irTIu88ah28i6gARa0iJwhk7SxpfMz481no2AT9di4A
>
* Request completely sent off
< HTTP/1.1 200 Ok
< date: Tue, 02 Sep 2025 19:30:22 GMT
< content-length: 5
<
* Connection #0 to host localhost left intact
admin
```

You can check for yourself that it won't work with an invalid token.

## Expiration time

Issuing tokens that never expire isn't best practice; in production you should probably have users rotate them periodically.
It's a good idea to set a lifetime for your token:

```scala
jwtGenerator.setExpirationTime(
  // 1 minute
  new java.util.Date(java.lang.System.currentTimeMillis() + 1000L * 60)
)
```

Decode this token again and you'll see something like this:

```json
{
  "sub": "org.pac4j.core.profile.CommonProfile#admin",
  "$int_roles": [],
  "exp": 1756842478,
  "iat": 1756842418,
  "username": "admin"
}
```

## Token encryption

To prevent your users from seeing what your token contains, you can configure token encryption. It's possible to encrypt tokens using a secret or using a key pair. Let's check both ways, starting with a key pair.

### Using key pair

Just as an example we'll generate the pair at runtime, but in production you should probably read it from some safe place:

```scala
// ...
import org.pac4j.jwt.config.encryption.EncryptionConfiguration
import org.pac4j.jwt.config.encryption.ECEncryptionConfiguration
import com.nimbusds.jose.JWEAlgorithm
import com.nimbusds.jose.EncryptionMethod
import java.security.KeyPairGenerator

object ZioApi extends ZIOAppDefault {
  // ...
  override val run = for {
    _ <- Server
      .serve(userRoutes)
      .provide(
        /* ... */
        ZLayer.succeed[EncryptionConfiguration] {
          val keyGen = KeyPairGenerator.getInstance("EC")
          val ecKeyPair = keyGen.generateKeyPair()
          val encConfig = new ECEncryptionConfiguration(ecKeyPair)
          encConfig.setAlgorithm(JWEAlgorithm.ECDH_ES_A128KW)
          encConfig.setMethod(EncryptionMethod.A192CBC_HS384)
          encConfig
        },
        ZLayer.fromZIO {
          for {
            encConfig <- ZIO.service[EncryptionConfiguration]
            signatureConfig <- ZIO.service[SignatureConfiguration]
          } yield SecurityConfig(clients =
            List(
              {
                val jwtAuthenticator = new JwtAuthenticator()
                jwtAuthenticator.addEncryptionConfiguration(encConfig)
                jwtAuthenticator.setSignatureConfiguration(signatureConfig)
                /* ... */
              },
              /* ... */
            )
          )
        }
      )
  } yield ()
}
```

And edit our token-issuing endpoint to generate encrypted tokens:

```scala
// ...
val userRoutes = Routes(
  Method.GET / "jwt" -> handler {
    for {
      profile <- ZIO.service[UserProfile]
      encConfig <- ZIO.service[EncryptionConfiguration]
      signatureConfig <- ZIO.service[SignatureConfiguration]
      jwtGenerator = {
        val jwtGenerator = new JwtGenerator(signatureConfig, encConfig)
        /* ... */
        jwtGenerator
      }
      token = jwtGenerator.generate(profile)
    } yield Response.ok.copy(body = Body.fromCharSequence(token))
  } @@[EncryptionConfiguration with SignatureConfiguration] Pac4jMiddleware
    .securityFilter(clients = Some(List("DirectBasicAuthClient")))
)
```

Now try to issue a token and then paste it into a [decoder][7]. You won't be able to decode it anymore, and neither will anyone else. The cost is a larger token size, but it depends on the encryption methods and algorithms. Just to compare, here is what it looks like with the configuration we used:

```text
eyJlcGsiOnsia3R5IjoiRUMiLCJjcnYiOiJQLTM4NCIsIngiOiI4TVh6a21TQ1RMU2J3VTdRQ2c2NmM1VWRIZ1VPR21fS2g4Q3lRVVZIOEgzR1dCeUNvaHJ2WUVLRzhOV1hOUUlSIiwieSI6IkFCQkNTQi14UXRxSlVjUU5rVUI3QzNqeHl4TzFQbW5GdEhjb2dIanFFMDZIVHBTTkQ4YWVkZWlJR3dPS01pVVMifSwiY3R5IjoiSldUIiwiZW5jIjoiQTE5MkNCQy1IUzM4NCIsImFsZyI6IkVDREgtRVMrQTEyOEtXIn0.TJAr2-oPN4DJt7MpSHx4jtXmGiaYvFCKZ3lbcdu4Ssad7h8z2utGd26GkZ8KWKGCQ_CwoKeDzgQ.C7SPaeOYitfqARDYvkTwag.tAg_6FNnVgPnrzScrVFWjSnNjiybBwvn5BN5XWLvF9S_L7UrPSQrshb5mZ3s7-0TaCjf15_lDV5tTZwRasa9BHWGlFDqnLrVRUlF1G0xZ29n4G7xTPEBWyYfUDcNPtn9qmWLQKVpylnUlFEKDGzjnbZoJ8m5oZnEfwyz0U083IgWdgQ_ZT9ecSW3p6dbcdPm1uOUiMdG2c9-K-1NvIoUHW_4EyRy0iXdC6vmKl1bPv2k2ZPywR5d-7IoC5SkKmUFXGZi1CC5rdScifvQRM5aYKI1ours0Kxmuc91waFtHHQ1jjx3TUhCydneQKPLuC9v.zWE9JERIo8jhYudjtqc2dlixuvicxeBY
```

### Using a secret

Encryption using a secret is much easier to manage and will look like this:

```scala
import org.pac4j.jwt.config.encryption.SecretEncryptionConfiguration
import com.nimbusds.jose.EncryptionMethod
import com.nimbusds.jose.JWEAlgorithm

ZLayer.succeed[EncryptionConfiguration] {
  new SecretEncryptionConfiguration(
    // 128-bit secret key
    "d734b141ab4b62e6f4c4fb0cda5ca435",
    /* algorithm = */ JWEAlgorithm.DIR,
    /* encMethod = */ EncryptionMethod.A256GCM
  )
}
```

You can pass different algorithms and encryption methods, but if you don't know what they mean, it's better to stick to the defaults. Such encryption will produce a token like:

```text
eyJjdHkiOiJKV1QiLCJlbmMiOiJBMjU2R0NNIiwiYWxnIjoiZGlyIn0..10VThBBga38rzyzG.ZYXz5xLMoLcySFRinndM-NAoEh93UVd8fQTS-AdPXW7nQsZw55bTkwO58Qx62f2_POSLi70_D3eTs0zrfrrKn436zDB-h6J8JprbnIC2zDjXFoBV6IMgkHMxk243no5fsG28tPdYdobiXWnsk7aOeNeCyBFq1f13HxiCp3E0xzTEa9RXj7vf_jQwTIblVExDDL28VcnopkdY8F4CD9KQHhStXbQxRN80GMdScRkItS-sZIu7QhdlRee-9V_bD201nNWBaazCz5vVd9quj7BWHMoSe48reTtEjYjP9Zeqs3h4Yrc._shfe6akHWBMg4jUud4dkg
```

## Restricting access by role

Imagine we need to restrict users from calling some specific routes which are intended to be used only by admins. We can do it using `pac4j`'s `Authorizer` classes. Let's see what it looks like, starting with assigning a role to an API user:

```scala
// ...
import java.util.Optional

import org.pac4j.core.context.CallContext
import org.pac4j.core.credentials.Credentials
import org.pac4j.core.profile.creator.AuthenticatorProfileCreator
import org.pac4j.core.profile.creator.ProfileCreator
import scala.jdk.OptionConverters._

val directBasicAuthClient = new DirectBasicAuthClient(
  new SimpleTestUsernamePasswordAuthenticator()
)
directBasicAuthClient.setProfileCreator(new ProfileCreator() {
  override def create(
      ctx: CallContext,
      credentials: Credentials
  ): Optional[UserProfile] =
    AuthenticatorProfileCreator.INSTANCE
      .create(ctx, credentials)
      .toScala
      .map { profile =>
        if (profile.getUsername == "admin") {
          profile.addRole("admin")
        } else {
          profile.addRole("user")
        }
        profile
      }
      .toJava
})
```

Here we edit our `directBasicAuthClient`, which protects our `/jwt` endpoint. As you can see, we implemented a new `ProfileCreator` class, which assigns a role depending on the username. In a real application, you would retrieve it from a database or another data source.
It's worth saying that here we can also add a custom payload to retrieve back from the JWT token later in the endpoint's body:

```scala
if (profile.getUsername == "admin") {
  profile.addRole("admin")
} else {
  profile.addAttribute("key", "value")
  profile.addRole("user")
}
```

We also need to pass an `Authorizer` to the `SecurityConfig`:

```scala
import org.pac4j.core.authorization.authorizer.RequireAllRolesAuthorizer

SecurityConfig(
  clients = /* ... */,
  authorizers = List("only-admin" -> new RequireAllRolesAuthorizer("admin"))
)
```

The only thing left is to put the `only-admin` authorizer onto our `/protected` endpoint:

```scala
Method.GET / "protected" -> handler { /* .. */ } @@ Pac4jMiddleware.securityFilter(
  clients = Some(List("HeaderClient")),
  authorizers = List("only-admin")
)
```

Now you can check your endpoint to ensure it's restricted from non-admin access. Decoding the JWT token for the `seroperson` user now shows something like this:

```json
{
  "sub": "org.pac4j.core.profile.CommonProfile#seroperson",
  "$int_roles": ["user"],
  "exp": 1756853202,
  "iat": 1756853142,
  "username": "seroperson"
}
```

# Conclusion

Here we saw how powerful `pac4j` is and how to use it with `zio-http`. Some code might look a little "Javish", so you may want to smooth it over with some wrappers. Maybe in the future I'll include such wrappers (like `ZioProfileCreator`) in `zio-http-pac4j` to get rid of Java's leftovers. It's worth noting that we saw only a very small part of `pac4j`'s power. The rest is available in the [documentation][5] and by exploring their API. For the complete sources, check [the repository][6].
<!-- prettier-ignore-start -->
[1]: https://github.com/seroperson/zio-http-pac4j/
[2]: https://github.com/pac4j/pac4j
[3]: https://datatracker.ietf.org/doc/html/rfc7519
[4]: https://www.jwt.io/introduction
[5]: https://www.pac4j.org/docs/index.html
[6]: https://github.com/seroperson/zio-http-pac4j/tree/main/example/zio-jwt
[7]: https://www.jwt.io
<!-- prettier-ignore-end -->

---

# Article: [Multiplayer in Civilization 5](/ru/2025/09/01/civilization-5-multiplayer-modding/)

Some time ago I took part in developing multiplayer support for one global modification of Civilization 5, and today I'd like to share some details about how network play works, how to actually get it running with mods, what's wrong with it in general, and how we fixed it.

<!--more-->

<%= toc %>

## What is Civilization 5?

[Civilization 5][23] is a turn-based strategy game released in 2010. The player takes on the role of a civilization's ruler and develops it from ancient times up to the modern era. Most of you have probably played this or another installment of this wonderful game, so we won't linger on the introduction.

The last official patch was released on [October 27, 2014][1]. Since then, Firaxis has switched to other installments of the series, but even today [quite a lot of people play the fifth one][2]. Of course, its player count is much smaller than that of the later installments, but it's still fairly large by today's standards, especially for a game that hasn't received official patches for more than 10 years.

## The modding community

One of the main reasons the game is still popular is its huge modding community. Today more than 10k mods are available in the [Steam Workshop][3], and the [local modding subforum][4] counts 500k+ posts. Mods keep this game fresh and still interesting. These days practically everything in the game can be changed by mods. The game's core, the DLL library with all the logic, was published for the community by the developers themselves.
Traditional modding means, i.e. `.lua`, `.xml`, `.sql` and other files, can change (almost) everything else.

## Vox Populi (also known as VP)

As I already said, a huge number of mods is available today, but the "central" and most popular one is **Vox Populi** ([subforum on civfanatics][6], [Discord channel][7]). It unites a large number of developers under a "single" development course, working on one giga-mod. [The code repository][5] counts more than 8k commits and 330 stars today, which is quite a lot for a game modification.

Before VP appeared, mods and mod "ecosystems" from individual modders were poorly compatible with each other, so it was hard to assemble a stable modpack in which nothing conflicted. With VP this became possible: besides modders themselves now maintaining compatibility of their mods specifically with VP, VP itself often "integrates" popular solutions into its own codebase.

If you want to give it a try, here is a guide on ["how to play Civilization 5 + VP"][11], and here is also a [guide on installing mods in general][12]. There is also a [separate thread about network play in VP][22]. It's worth noting that installing mods from the Steam Workshop is not recommended, since there they are often outdated. If you seriously want to go all in and try it, for stability it's better to install `.zip` files directly from the forum threads and GitHub.

# What I coded there, and how

I've played Civilization 5 on and off since its release, but only with friends over the network. One day we decided to try multiplayer with mods (with plain VP) - that was around 2018 - and it worked tolerably well. In 2023 we tried again, and multiplayer was already broken and literally unplayable. Roughly since then, and over the course of a year, I contributed to VP to make it multiplayer-compatible.
## VP's compatibility with network play

The reason VP stopped being compatible with network play was that the community didn't yet understand what affects compatibility and how multiplayer works in general. So everything was coded with no regard for network play whatsoever. And everything was tested only in single-player (which, by the way, is on this list only because testing anything in multiplayer is quite a quest in itself).

Multiplayer was officially given up on, with the justification that nobody knew how to maintain it, and that the netcode supposedly wasn't open-sourced (spoiler: only the really low-level byte-shuffling logic isn't open-sourced, while everything needed to create multiplayer-compatible mods was already available), so nothing could be fixed. As a result, mods in network games worked rather randomly: some things worked, some didn't, and some worked only every other time. There was demand for multiplayer, but no understanding of how to make it work again, so nobody took it on.

It was a bit of a mystery to me how that could be - after all, some mods work fine, and VP itself still worked fine back in 2018. That means some features added since then broke multiplayer, and all that was needed was to find and fix them. After many long and desperate debugging sessions, digging through a rather so-so C++ codebase, I finally managed to understand how it all works and which rules must be followed for a mod to be compatible with network play.

## Network game architecture

<%= render "alert", type: "warning", message: "Just in case, I'll note that I'm not a C++ developer and have nothing to do with gamedev. Everything described here is just the observations of a fan of the series who, by a lucky coincidence, is good with computers."
%>

Generally speaking, the network game architecture of Civilization 5 is quite peculiar, so it's no surprise that even the vanilla version without mods constantly reproduces bugs and "desyncs", especially in the late game or in sessions with many players.

If you imagine for a second how the network play of some turn-based strategy - or really any game - would work in a vacuum, the scheme that probably comes to mind right away has:

- A host, the "owner" of the current session state, with which all other players sync their state. Or a dedicated server instead of a host.
- Session participants, who pull the state from the host and push their actions there to update that state.

I suppose this is the "traditional" model that works in most games, but in Civilization 5 the model is completely different. In Civilization 5 there is formally no "canonical" state among the players in a session. There is no "owner" of the current session state, and no dedicated server where all the logic runs and from which all session participants would pull the state. Here nobody pulls anything from anywhere; there is only push.

When a game session starts, each participant computes all the logic on their own and broadcasts only the player's direct actions. For example, "move unit A to point B" - this action gets published, all session participants receive the broadcast and change their state so that unit A stands at point B on their screens. If, say, point B is impassable terrain and moving there is impossible, then each session participant computes the final point where unit A ends up on their own. Ideally, the result of computing that final point should always be the same for every session participant - after all, everyone is playing the same version of the game and running the same code.

If there are bots in the session, their actions aren't broadcast anywhere either (because only real players' actions are broadcast).
Their behavior is likewise computed by each session participant independently: for example, the AI decides "build barracks in the city" or "build a library in the city" based on some algorithm that works identically for every participant, so in the end the AI performs the same action everywhere.

<%= picture 'images/2023-11-10-civilization/civ5-mp.webp' %> _In the English version of this article I once drew this diagram to show how actions are broadcast between players and how that affects their resulting state_

In theory this all works perfectly, and it keeps working as long as all players have identical state. In theory the state really is identical for everyone: everybody broadcasts their actions, everybody processes them identically, all the logic runs the same way for each player and returns the same results, so everyone stays in sync with everyone else. In such a scheme no (real) randomness may exist, because as soon as randomness appears in the logic, the algorithms on each player's computer start returning different results and the game states stop being identical. For example, if the AI chose between "build barracks" and "build a library" at random rather than via a strict algorithm, then on player A's machine the AI would build barracks while on player B's machine it would build a library. AI actions are not broadcast between session participants, the game state is never pulled by anyone from anywhere, and there is no "canonical" state at all, so randomness in that logic would mean that player A and player B effectively play different games. On one player's screen the AI would have barracks, on the other's a library. The AI's subsequent actions would differ on each screen too, and over time the difference would snowball. Sooner or later such a session inevitably ends with the game crashing.
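To make the model concrete, here is a minimal sketch of such a push-only scheme; the types and names are mine for illustration, not from the actual Civ5 codebase. Every peer applies the same broadcast action to its own full copy of the state, and consistency relies entirely on the apply logic being deterministic:

```cpp
#include <cassert>
#include <utility>
#include <vector>

// Hypothetical types; the real game state is vastly more complex.
struct Action {
    int unitId;
    int targetX;
    int targetY;
};

struct GameState {
    // index = unitId, value = (x, y) position
    std::vector<std::pair<int, int>> unitPositions;

    // Every peer runs this for every broadcast action. Nothing is
    // ever pulled from a host, so this function must return the
    // exact same result on every machine.
    void apply(const Action& a) {
        // Deterministic rule: clamp the move to non-negative tiles.
        int x = a.targetX < 0 ? 0 : a.targetX;
        int y = a.targetY < 0 ? 0 : a.targetY;
        unitPositions[a.unitId] = {x, y};
    }
};
```

Two peers that start from the same state and receive the same broadcasts stay in sync forever; a single non-deterministic branch inside `apply` is enough to make their states silently diverge.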
<%= render "alert", type: "info", message: "На самом деле в игре в некотором виде реализован пулл \"эталонного\" состояния, но вызывается он только в том случае, если игра уже однозначно поняла, что она в полном рассинхроне. Синхронизация между всеми игроками сессии на состояние игры хоста происходит в момент, когда алгоритм обнаруживает, что какая-то сущность отсутствует у одного игрока, но присутствует у другого. Например, это юнит или город. Если повезет и игра это обнаружит раньше, чем крашнется, то начнется специальный процесс синхронизации и все игроки начнут следующий ход в том состоянии, которое было просчитано у хоста." %> Но вернемся к идеальному миру, где рандома в логике игры нет, состояние сессии вычисляется всегда одинаково у каждого участника, и по итогу все прекрасно работает. Это и есть "главное правило" стабильной сетевой игры в Цивилизации 5. И еще одно правило, что каждое действие человека нужно бродкастить на всех остальных, но это, в принципе, достаточно очевидное правило. Как вы уже, наверное, догадались, эти правила и нарушались моддерами (и нарушались самими разработчиками в том числе), что приводило к разным состояниям игры у участников сессии, а в последствии и к вылетам. И, как выяснилось, не нарушать их - это прям вот головная боль. ### Примеры кода, который делал сетевую игру невозможной Как я уже говорил, долгое время VP разрабатывали без оглядки на работоспособность в сетевой игре. В одиночной игре все работало классно, но в сетевой не работало вообще. После вышеописанного можно понять почему - потому что рандом в логике на стабильность одиночной игры не влияет. Ну, будет бот что-то вычислять случайным образом, да и фиг с ним. И бродкастить никому ничего не нужно - новые фичи, в которых появлялись какие-то новые действия пользователя, просто кодили и все, забыв (или забив) на бродкаст в случае сетевой игры. 
But suppose user actions got a correct broadcast and handling, and nobody used a literal `math.random()` in the code (it's considered "bad form" anyway, and the randomness there is usually pseudo-random, seeded per session). It would seem we got rid of all visible randomness, yet multiplayer keeps crashing constantly. So where could the problem be? As it turned out, the problem is still the same: identical code returns different results on different machines. Some example fixes:

- [#9768][13]: using `sort`, which in C++ doesn't guarantee a reproducible order for equal elements.
- [#10112][14]: using a `set` with arbitrary ordering.
- [#9867][15]: using pointers as keys in a `map`, with arbitrary ordering as a consequence.
- [#10250][16]: sharing the same cache between the UI and the core logic. The cache is computed once per turn, and if it has already been computed this turn, it isn't recomputed. So if the cache got initialized by the user clicking a button, its value will be stale by the time the core runs and won't be refreshed. Whether two players' states desynced therefore depended on whether one of them clicked some poorly implemented button.
- [#9767][17], [#9970][18]: using variables in logic computations that take different values depending on which player performs the computation.

More examples of such bugs can be found [here][19]. Sometimes you run into very exotic cases that are extremely hard to track down. A typical debugging session for a mod in multiplayer means debugging via logs and comparing those logs from two simultaneously running game instances, and it takes a lot of time. By the time you build everything, sprinkle logs everywhere, launch, create a session, reproduce the issue, and compare the logs, debugging even a simple desync takes hours if you're lucky, not to mention the more tedious scenarios.
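As an illustration of the shared-cache class of bug (the one behind [#10250][16]), here is a deliberately simplified sketch; the types are mine, not from the real codebase. A lazily computed per-turn cache is harmless in singleplayer, but in this architecture it desyncs peers as soon as one of them touches it earlier than the other:

```cpp
// Hypothetical per-turn cache shared by the UI and the core logic.
struct ScoreCache {
    int turnComputed = -1;
    int value = 0;

    // Computed at most once per turn; later calls within the same
    // turn return the cached value even if the inputs have changed.
    int get(int currentTurn, int input) {
        if (turnComputed != currentTurn) {
            value = input * 2; // stand-in for an expensive computation
            turnComputed = currentTurn;
        }
        return value;
    }
};
```

If player A clicks a button that calls `get(turn, 1)` early in the turn, and the core logic later calls `get(turn, 3)`, A's core sees the stale value 2 while player B, whose UI never touched the cache, computes 6, and the two simulations quietly diverge.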
The essence of the problem is always the same: somehow the logic produced different results on different "ends".

### What could have been done differently from the start?

This multiplayer architecture has its advantages. But do those advantages outweigh the maintenance burden of such a design? I'd say definitely not. Developing like this, keeping all these nuances in mind at all times, is really hard. No wonder that, as I said, even vanilla Civilization 5 without any mods constantly desynced in network games. If someone acted as a server and the core logic ran there rather than on every player's machine separately, multiplayer would probably be more stable. In theory you could build all of that, completely rewriting the multiplayer architecture so that it's stable forever. But you'd have to be a huge enthusiast to spend that much time on it.

Things would also have been better from the start if Firaxis had shipped tests along with the DLL (if they ever existed, though in a codebase like this they surely should have). Today the VP codebase (i.e., originally the entire Civilization 5 logic), hundreds of thousands of lines, isn't covered by a single test, so breaking something there is far easier than fixing it. Thankfully, changes are at least tested manually in singleplayer. Manually testing changes in multiplayer is a form of masochism that far from everyone is ready for, because it's no longer just "compile and run".

## Reverse engineering the closed code

As I said, Firaxis released a Windows DLL with all the game logic: the AI, all the decision-making algorithms, and so on. By modifying this DLL you can change almost anything. The closed part, implemented outside the DLL, covers rendering, the UI, camera handling, shuffling bytes over the network, Steam integration, the local database, and various other small things you usually don't need to touch. But that closed part has bugs too, and those, it turns out, can't be fixed at all.
Of course, I shouldn't complain: they gave the community at least something, and that's already good. Still, the fact that modders can modify only the DLL rather than the whole codebase imposes certain limitations:

- Development is locked to Visual Studio 2008, so no modern C++ features are available.
- All compiled libraries and executables are 32-bit, which limits the amount of memory the game can use. With many players and all the updated VP logic, that memory sometimes runs out.
- Bugs and memory leaks in the closed code cannot be fixed.
- Rendering and everything listed above cannot be modded.

Moreover, the executables are protected by CEG (Steam's DRM, which is outdated but still functional), so you can't patch the closed code and distribute it with a mod. But there's good news too:

- Even though the executables are CEG-protected, they can still be patched at runtime. This is fairly easy to implement, given that we can already modify the code in the loaded DLL.
- The executables aren't obfuscated with some fancy modern technology, as is common nowadays.
- And the best part: it's unclear whether this was intentional or not, but the Linux executable shipped with all the symbol names and virtual tables preserved. Since it's almost identical to the Windows version, you can map them against each other and locate the function you need in the Windows binary by its name in the Linux one.

<%= picture 'images/2023-11-10-civilization/ghidra-civ5-mp.webp' %> _This is roughly what the Linux executable looks like_

This is how I implemented, for example, [a button that triggers the synchronization process for all session participants][20]. As I mentioned earlier, that process starts automatically only when the game itself has noticed it's desynced, but it rarely gets that far: it usually crashes before it realizes.
Such a button often makes it possible to finish a game that would otherwise start crashing consistently at some point. Yes, it would be better to fix all the desync bugs, but that would require devoting yourself to this mod full-time, which, of course, nobody is going to do. This patch serves as proof that nothing is impossible in Civilization 5 modding. With enough determination, you can reach even the closed code.

# Conclusion

In 2023 we fixed quite a lot, and multiplayer was probably the most stable it had ever been in VP's history. Since I left the development, as far as I know, multiplayer has regressed a bit again, but it's still clearly better now than it once was. So that's the story. If you ever feel like reliving 2010 and playing Civilization 5 with mods, I assure you it will be worth it. The singleplayer, at the very least. Long live VP!

And subscribe to my <a href="<%= site.metadata.author.tg_channel_link %>">🛫 Telegram channel</a>, by the way. There I post all sorts of thoughts about everyday development that don't fit the blog format.
<!-- prettier-ignore-start -->
[1]: https://store.steampowered.com/news/app/8930/view/2912096327579037004
[2]: https://steamcharts.com/app/8930
[3]: https://steamcommunity.com/workshop/browse/?appid=8930
[4]: https://forums.civfanatics.com/categories/civilization-v.385/
[5]: https://github.com/LoneGazebo/Community-Patch-DLL
[6]: https://forums.civfanatics.com/forums/community-patch-project.497/
[7]: https://discord.com/invite/KbgmCRU
[8]: https://forums.civfanatics.com/resources/more-unique-components-for-vox-populi.26966/
[9]: https://forums.civfanatics.com/threads/poll-more-wonders-for-vp.653498/
[10]: https://forums.civfanatics.com/threads/even-more-resources-for-vox-populi.654431/
[11]: https://github.com/LoneGazebo/Community-Patch-DLL/#how-can-i-play-this
[12]: https://jfdmodding.fandom.com/wiki/Civ_V_Mod_Installation
[13]: https://github.com/LoneGazebo/Community-Patch-DLL/issues/9768#issuecomment-1521206665
[14]: https://github.com/LoneGazebo/Community-Patch-DLL/pull/10112
[15]: https://github.com/LoneGazebo/Community-Patch-DLL/pull/9867
[16]: https://github.com/LoneGazebo/Community-Patch-DLL/pull/10250
[17]: https://github.com/LoneGazebo/Community-Patch-DLL/pull/9767
[18]: https://github.com/LoneGazebo/Community-Patch-DLL/pull/9970
[19]: https://github.com/LoneGazebo/Community-Patch-DLL/pulls?q=is%3Apr%20is%3Aclosed%20author%3Aseroperson
[20]: https://github.com/LoneGazebo/Community-Patch-DLL/pull/10281
[21]: https://github.com/Lymia/MPPatch
[22]: https://forums.civfanatics.com/threads/voxpopuli-modpacks-update-4-22.685164/
[23]: https://en.wikipedia.org/wiki/Civilization_V
<!-- prettier-ignore-end -->

---

# Article: [Choosing a self-hosted web analytics solution](/2025/08/12/choosing-a-self-hosted-web-analytics/)

Recently I decided to finally get rid of Google Analytics in favor of a self-hosted solution. Here's how it went: the list of available options to migrate to, and why you might need to do the same.
<!--more-->

## Why I decided to migrate

Initially this website and all my other projects were, of course, configured to use GA. And everything actually went well until I started to care about things like GDPR, CCPA, PECR and so on. Nobody had complained about my website, but it just bothered me for no particular reason. Making GA GDPR-compliant is a pain in the ass, and you would eventually end up with that weird "Accept all cookies" banner. Moreover, you must then implement this logic for each of your projects, which is simply unacceptable. Besides this, it's worth noting the following arguments:

- The last time I checked, GA's JavaScript file was 377 KB, which nowadays is often larger than some actual projects.
- GA's JS is blocked by default by most adblockers, which makes your analytics more or less non-representative depending on your niche. And our software development niche is probably the sweetest spot for adblock users.
- And, finally, GA's UI / UX is just bad.

But despite all these cons, I believe there are good use-cases for GA, so if it suits you well, then this article isn't about you. Personally, I think that if you're serious about your blog, you should eventually migrate.

## What options do we have

Next, let's see where we can migrate. We will consider only self-hosted and GDPR-compliant options because, in my opinion, there is no truly free and good managed web analytics solution.

- [milesmcc/shynet][1] - lightweight web analytics written in Python, designed to be self-hosted from the very beginning. It requires practically nothing to start and can use SQLite or PostgreSQL as a database. However, it's probably too lightweight, as it's missing some essential features like [custom events][2] and [UTM tracking][3].
- [piratepx/app][4] - probably the most lightweight web analytics you have ever seen, written in JavaScript. It counts visits and nothing more. Requires PostgreSQL to run. It also has a [live demo][5].
- [matomo-org/matomo][6] - looks like the most mature and feature-complete self-hosted solution, written in PHP. It requires MySQL to run and nothing else. Honestly, this probably would be the best solution to choose, but PHP + MySQL + their UI scares me away.
- [plausible/analytics][7] - another mature solution with nice UI / UX and enough features, written in Elixir. It requires the resource-intensive ClickHouse + PostgreSQL bundle, which is a lot for self-hosting.
- [rybbit-io/rybbit][8] - and the last, a pretty good-looking major competitor to everything above. Written in JavaScript, it also requires ClickHouse + PostgreSQL, and probably has more features than Plausible.

It's also worth mentioning solutions such as [posthog/posthog][9], [arp242/goatcounter][10] and [explodingcamera/liwan][11]. Many more exist on the market, but these are the most noticeable. After long testing and research, I ended up torn between Plausible and Rybbit. Subjectively, I liked Plausible more. Rybbit is also great, and maybe even better overall, but there is something inside me that doesn't like it 🤡 So, my personal subjective top is: **Plausible**, **Rybbit**, **Matomo**.

## Conclusion

So, is it worth it or not? To start with, what I can definitely say is that adblock is a real thing among my audience, as the unique-visitor counts in GA vs self-hosted Plausible differ by **4-5x** 🤡 That's a really huge difference: for example, GA shows me 10 visitors, while Plausible shows 50. And finally, I'm at peace knowing that everyone's privacy is respected. And I'm glad this was accomplished without implementing weird "Accept all cookies" banners. Yes, hosting and managing ClickHouse hurts, as it requires at least 6GB of RAM to run, which is usually too much for self-hosting. But once you accept it, it feels better. On the other hand, now that I have it running, I can try some things which I had refused to configure due to my reluctance to install ClickHouse.
So, summing everything up, I think it's worth it. Do it once and be at peace.

<!-- prettier-ignore-start -->
[1]: https://github.com/milesmcc/shynet
[2]: https://github.com/milesmcc/shynet/issues/42
[3]: https://github.com/milesmcc/shynet/issues/40
[4]: https://github.com/piratepx/app
[5]: https://app.piratepx.com/shared/bGQbUJ-YADC_xIGZaYmyqp-J_PD6O1pkCdHmYdIjUvs53ExsImlzFeou4MCuZRbH
[6]: https://github.com/matomo-org/matomo
[7]: https://github.com/plausible/analytics
[8]: https://github.com/rybbit-io/rybbit
[9]: https://github.com/PostHog/posthog
[10]: https://github.com/arp242/goatcounter
[11]: https://github.com/explodingcamera/liwan
<!-- prettier-ignore-end -->

---

# Article: [Simple Reminder Telegram MiniApp](/2025/06/19/simple-reminder-telegram-miniapp/)

I'm glad to present another pet project of mine and, as before (**[📫 Link saver bot for Telegram][1]**), check out the internals, discuss the technology stack used, and other things.

<!--more-->

## What is it?

<%= picture 'images/2025-06-14-reminder/preview.webp' %> _Preview images for Telegram's AppStore analogue_

**[🛫 It's a Telegram MiniApp][2]** which is, actually, a simple TODO list, but with some convenient features:

- Synchronized across devices out of the box and works wherever you have Telegram.
- Sends notifications directly to Telegram, so you'll probably never miss them.
- Lets you assign and manage categories.

It's not a revolutionary idea, of course, but besides practicing another project launch, I found it really useful for myself. Also, I've created a simple landing page for this project to catch some traffic from search engines: **[🌐 reminder.seroperson.me][3]**. Later I'll probably implement a standalone web version there, but it's still under consideration. See the technology stack on the **[🚧 Projects][4]** page.
<!-- prettier-ignore-start -->
[1]: /2023/09/08/link-saver-bot-for-telegram/
[2]: https://t.me/sp_adv_reminder_bot?startapp
[3]: https://reminder.seroperson.me/
[4]: /projects
<!-- prettier-ignore-end -->

---

# Article: [Previewing nix-managed dotfiles](/2025/05/26/previewing-nix-managed-dotfiles/)

Some time ago I wrote an article **[⚙️ Managing dotfiles with Nix][1]**. Since then not much has changed in my repository, but recently I managed to implement a feature which I've had in mind for quite a long time: previewing dotfiles.

<!--more-->

## What does it look like?

It's better to see something once than to hear about it a thousand times, so [here is how it works for my dotfiles][2]:

![Preview](/images/2025-05-26-nix/docker.gif)

There are numerous ways to implement this, but I believe the most correct ones are:

- Run directly via `nix develop` in a temporary `HOME` directory.
- Build a Docker container using `nix`, make it take care of dependencies and stuff, then run it.

## Implementing previewing via nix-shell (nix develop)

<%= render "alert", type: "warning", message: "Here I describe the case when you manage your configuration with a standalone home-manager installation, not as part of a NixOS module. It's still possible to implement for the NixOS case too, but some minor tweaks are probably required." %>

I assume you have configured your `flake.nix` and `home.nix` files and can initialize your configuration using a command like:

```
home-manager init --switch $HOME/.dotfiles/ --flake $HOME/.dotfiles/
```

Now let's see how to make your configuration `nix develop`-friendly. First, you have to parameterize your `home.nix` by two arguments, `homeDirectory` and `username`. We need this to be able to initialize our dotfiles in a non-hardcoded home directory. We can do it like this, `flake.nix`:

```nix
{
  # ...
  outputs = { self, nixpkgs, home-manager, ...
}@inputs: let myHomeManagerConfiguration = { useSymlinks, homeDirectory, username, dotfilesDirectory }@extraSpecialArgs: home-manager.lib.homeManagerConfiguration { inherit pkgs extraSpecialArgs; modules = [ ./home.nix ]; }; in { # ... homeConfigurations = { "seroperson" = myHomeManagerConfiguration { useSymlinks = true; homeDirectory = "/home/seroperson/"; username = "seroperson"; dotfilesDirectory = "/home/seroperson/.dotfiles"; }; }; }; } ``` And `home.nix`: ```nix { config, pkgs, useSymlinks, homeDirectory, username, dotfilesDirectory, ... }: let inherit (import ./nix/utils.nix { inherit config useSymlinks dotfilesDirectory; }) fileReference; in { home = { inherit homeDirectory username; stateVersion = "24.05"; }; xdg.mime.enable = false; xdg.configFile = { "git" = { source = fileReference ./git; recursive = true; }; # ... }; # https://github.com/nix-community/home-manager/issues/2995 programs.man.enable = false; # ... } ``` And also we need to define function `fileReference` in `nix/utils.nix`, which returns relative path in `dotfilesDirectory` or path in `/nix/store/` depending on `useSymlinks` value. It allows us to build both immutable and mutable (via symlinking) dotfiles. ```nix { dotfilesDirectory, config, useSymlinks }: rec { baseName = p: let removeNixStorePrefix = nsp: let m = builtins.match "/nix/store/[^-]+-(.*)" ( toString nsp ); in if m == null then nsp else (builtins.head m); in builtins.baseNameOf (removeNixStorePrefix p); fileReference = path: if useSymlinks then config.lib.file.mkOutOfStoreSymlink "${dotfilesDirectory}/${baseName path}" else path; } ``` Be careful with this implementation, as it doesn't support nested directories. For example: ```nix { xdg.configFile = { # Works "git" = { source = fileReference ./git; recursive = true; }; # Works "zsh" = { source = fileReference ./zsh; recursive = true; }; # ... 
}; # Works home.file.".zshenv" = { source = fileReference ./.zshenv; }; # Doesn't work home.file.".zshenv" = { source = fileReference ./zsh/.zshenv; }; } ``` So everything you're going to reference must be in the git root directory. Okay, now re-initialize your configuration and make sure everything works. After you have done this, our next step is to define our shell output in `flake.nix`: ```nix { # ... outputs = { self, nixpkgs, nixpkgs-unstable, home-manager, ... }: let myHomeManagerConfiguration = { useSymlinks, homeDirectory, username, dotfilesDirectory }@extraSpecialArgs: home-manager.lib.homeManagerConfiguration { # ... }; in { devShells.${system}.default = pkgs.mkShell rec { homeDirectory = builtins.getEnv "HOME"; username = builtins.getEnv "USER"; activationPackage = (myHomeManagerConfiguration { inherit homeDirectory username; useSymlinks = false; dotfilesDirectory = ""; }).activationPackage; buildInputs = [ activationPackage pkgs.nix ]; shellHook = '' export HOME=${homeDirectory} export USER=${username} # Fixes `Could not find suitable profile directory` error mkdir -p ${homeDirectory}/.local/state/nix/profiles ${activationPackage}/activate # Run zsh and then exit IS_PREVIEW=1 exec $HOME/.nix-profile/bin/zsh ''; }; # ... }; } ``` Here we define a development shell, which has our dotfiles as a dependency, initializes it using `HOME` and `USER` environment variables, and calls home-manager's `/activate` script in `shellHook` to do symlinking. I have also used the `IS_PREVIEW` variable to indicate that we're in a simulated environment to be able to disable some unnecessary functionality (like `ssh-agent` initialization). Now go and check the result: ```sh mkdir -p /tmp/test USER=seroperson-preview HOME=/tmp/test nix develop --impure github:seroperson/dotfiles ``` <%= render "alert", type: "info", message: "`--impure` flag is important as we're accessing environment variables." 
%>

## Building your dotfiles into a container

It could be enough to preview everything with `nix develop`, but I couldn't let go of the idea of bundling my environment into a Docker container. Moreover, I believe such an approach has more use-cases besides previewing dotfiles, and there is really not much information on this topic. Of course, we could bundle everything manually using a plain old `Dockerfile`, `git clone` our repository and so on, but that's not the nix way. We want to build the image using nix to leverage all its pros. It's not so easy to implement correctly, as you need a fully working `nix` environment inside your container, and it's not enough to just add your dotfiles as a dependency, like we did with `nix develop`, and call `dockerTools.buildImage`. If you dig into this topic enough, you'll probably find [an issue][3] which helped me to implement it correctly. To be more precise, [this comment][4] by **cameronraysmith**, author of [nixpod][5], pointed me in the right direction. In short, you need to implement a wrapper around `dockerTools.buildImage` with complete `nix` environment initialization, which you can mostly take from the [nix repository][6]. I edited it just a little, but the listing is too long to paste here, so you can look at it [in my repository][7]. Assuming we placed this file in `nix/docker.nix`, here is how we implement the `docker` output:

```nix
{
  # ...
  outputs = { self, nixpkgs, nixpkgs-unstable, home-manager, ... }:
    let
      myHomeManagerConfiguration = { useSymlinks, homeDirectory, username, dotfilesDirectory }@extraSpecialArgs:
        home-manager.lib.homeManagerConfiguration {
          # ...
        };
    in {
      packages.${system}.docker = (import ./nix/docker.nix {
        pkgs = pkgs;
        name = "seroperson.me/dotfiles";
        tag = "latest";
        extraPkgs = with pkgs; [ ps gnused coreutils ];
        extraContents = [
          (myHomeManagerConfiguration {
            useSymlinks = false;
            homeDirectory = "/root";
            username = "root";
            dotfilesDirectory = "";
          }).activationPackage
        ];
        extraEnv = [ "IS_PREVIEW=1" ];
        rootShell = "/root/.nix-profile/bin/zsh";
        cmd = [ "/bin/sh" "-c" "/activate && exec /root/.nix-profile/bin/zsh" ];
        nixConf = {
          allowed-users = [ "*" ];
          experimental-features = [ "nix-command" "flakes" ];
          max-jobs = [ "auto" ];
          sandbox = "false";
          trusted-users = [ "root" ];
        };
      });
      # ...
    };
}
```

What we do here is initialize our dotfiles for the `root` user, copy all the dotfiles content into the container, and then run `/activate` and our `zsh` shell. Now you can try to build and run it:

```sh
nix build .#docker
docker load < ./result
docker run --rm -it seroperson.me/dotfiles
```

The result will probably be the gif you've seen above.

## Conclusion

That's it. Again, refer to [my repository][8] to see how everything works.

<!-- prettier-ignore-start -->
[1]: /2024/01/16/managing-dotfiles-with-nix
[2]: https://github.com/seroperson/dotfiles?tab=readme-ov-file#-preview-without-installation
[3]: https://github.com/nix-community/home-manager/issues/5258
[4]: https://github.com/nix-community/home-manager/issues/5258#issuecomment-2235325419
[5]: https://github.com/cameronraysmith/nixpod
[6]: https://github.com/NixOS/nix/blob/master/docker.nix
[7]: https://github.com/seroperson/dotfiles/blob/master/nix/docker.nix
[8]: https://github.com/seroperson/dotfiles
<!-- prettier-ignore-end -->

---

# Article: [Preparing a project to be vibe-coded](/2025/05/02/preparing-a-project-to-be-vibe-coded/)

If you are an IT specialist, you have probably heard about tools like [Cursor][15], [Windsurf][16], [Cline][17], and [Roo Code][18]. These tools can create entire projects for you even if you don't know programming.
You just need to download an IDE (or VS Code with some extensions), give it instructions, and wait for the result. If you're lucky, it will even work. I have spent some time using these AI tools, and today I want to share my experience of preparing a project for "vibe-coding". I used Cursor the most, but that doesn't matter much because all these tools use the same AI models. The main difference is usually the UI. What I'm going to talk about applies to all of these kinds of tools.

<!--more-->

<%= toc %>

As I said, we'll discuss how to prepare a project for AI coding. I believe you always need a "preparation stage" for both existing and new projects. Of course, you can just let the AI work on your project without guidance, but this often breaks things. The AI doesn't know your preferences, your project, or what you want. It can infer some requirements from your code, but this doesn't always work well. Sometimes it can also make unexpected changes, like "Hey, I also migrated your project from Vue to React." That's why you need to prepare your project for AI: to make it more predictable and make it do what you need without unwanted changes. Preparation means documenting your preferences, requirements, agreements, tech stack, and so on. It's like `CONTRIBUTING.md` and `README.md`, but for AI. You could just put everything into those `.md` files, but it's better to separate human and AI documentation.

## Using .cursorrules file

The simplest way to store your AI documentation is using a `.cursorrules` file. [This file can contain any instructions in free format][2], and it will be loaded by default in every new chat. For example, you can write something like this and Cursor will remember it:

```text
- Use only Tailwind classes and minimize the amount of custom CSS.
```

This approach works fine, but it requires you to manually manage the `.cursorrules` file, and as of now it's deprecated in favor of a newer approach.
## Using .cursor/rules/ directory

Nowadays [you can split your rules][3] into files like `architecture.mdc`, `stack.mdc`, `styling.mdc` and place them in the `.cursor/rules/` directory. This helps keep your rules organized. Take a look at a good example of such rules: [eyaltoledano/claude-task-master][19]. You can also use an instruction like:

```text
- Read your context from all `*.md` files from `context/` folder.
```

This lets you store your rules in the `context/` directory. This approach works well if Cursor isn't the only AI assistant you use.

## Make your memory self-manageable

Instead of manually editing your rules, you can ask Cursor to manage them for you. A basic instruction for `.cursorrules` is:

```text
- Read context from all files from `context/` folder.
- Whenever you learn new knowledge, requirements or preferences about the project, summarize it and put it into the `context/` folder. Choose a suitable existing `.md` file or create a new one.
```

This works, but you must carefully review what it writes. It often adds unnecessary information, so don't let your `context/` become a mess of AI-generated shit.

<%== details "That's what happens when you let your AI run in YOLO mode."
do %> ```text docs/ ├── ai-assistant-integration.md ├── architecture-plan.md ├── build-with-bun.md ├── bun-migration.md ├── cline-integration.md ├── code-improvements.md ├── cursor-integration.md ├── custom-folder-name.md ├── debug-mcp-config.md ├── documentation-structure.md ├── file-naming-convention-update.md ├── file-naming-convention.md ├── implementation-plan-rule-formats.md ├── integration-testing-guide.md ├── logging-system.md ├── mcp-protocol-specification.md ├── memory-bank-bug-fixes.md ├── memory-bank-mcp-startup.md ├── memory-bank-path-changes.md ├── memory-bank-status-fix.md ├── memory-bank-status-prefix.md ├── migration-guide.md ├── modular-architecture-proposal.md ├── npx-usage.md ├── README.md ├── requirements.md ├── roo-code-integration.md ├── rule-examples.md ├── rule-formats.md ├── test-coverage.md ├── testing-clinerules.md ├── testing-guide.md ├── testing-strategy.md ├── testing.md └── usage-modes.md ``` Source: [some project on GitHub](https://github.com/movibe/memory-bank-mcp/tree/7851e066a8cdfa216c7411ea5f70cce378bb1271/docs) <% end %> ## "Memory bank" prompts Another way to solve this problem is using "memory bank" prompts. These are instructions that filter unnecessary information, organize it between existing `*.md` files, and help keep your context clean. The idea came from [Cline memory bank prompt][11] and then spread around. Now you can find many different variations: [cursor-memory-bank][4], [cursor-bank][10], [rules_template][12], [memory-bank-mcp][5] [memory-bank-mcp (another)][6], [roocode-memorybank-optimized][7], [roo-code-memory-bank][9], [RooFlow][8] and others. You can try these yourself, but I didn't find them very useful. Things still don't "just work" and you often need to manage something manually. But manual management becomes even harder because you don't fully understand the prompts. When something doesn't work as expected, you have to debug and edit these prompts, which takes extra time. 
And this happens quite often, because these approaches aren't very solid or battle-tested.

## Graphiti

During my attempts to improve my workflow, I found another promising solution: [Graphiti][13]. It's a kind of AI-powered database that stores knowledge about whatever you put into it. It wasn't originally meant to be used as memory for an AI assistant, but recently they published an article called ["Cursor IDE: Adding Memory With Graphiti MCP"][1]. I checked it out carefully, and right now it's not very usable:

- It requires you to run a separate service and database on your computer.
- The MCP integration doesn't work well. Usually, the AI ignores it unless you directly ask it "to store memory."
- Managing memory manually is difficult. The best way to remove something from memory is probably to remove it directly from the database.
- It requires OpenAI API keys. Other providers are possible too, but only if you code the support yourself.
- The installation is complicated and the documentation is incomplete (the MCP part has no documentation at all). If you need to do something, you'll have to look through the code.

They published [this MCP][14] only about a month ago, so things might not be polished yet; they might improve in the future.

## Conclusion

As you can see, there are many approaches to try. Which one should you use? After everything I've tried, I think the best choice is the `.cursor/rules/` directory with manually created rules (or AI-generated rules that you carefully review). In my opinion, everything else is a waste of time for now, but things might change in the future.
There's still room for a good memory management tool, which might appear soon given how fast the AI market is changing 🙂 <!-- prettier-ignore-start --> [1]: https://blog.getzep.com/cursor-adding-memory-with-graphiti-mcp/ [2]: https://docs.cursor.com/context/rules#cursorrules-legacy [3]: https://docs.cursor.com/context/rules#project-rules [4]: https://github.com/vanzan01/cursor-memory-bank [5]: https://github.com/alioshr/memory-bank-mcp [6]: https://github.com/movibe/memory-bank-mcp [7]: https://github.com/shipdocs/roocode-memorybank-optimized [8]: https://github.com/GreatScottyMac/RooFlow [9]: https://github.com/GreatScottyMac/roo-code-memory-bank [10]: https://github.com/tacticlaunch/cursor-bank [11]: https://docs.cline.bot/improving-your-prompting-skills/cline-memory-bank [12]: https://github.com/Bhartendu-Kumar/rules_template [13]: https://help.getzep.com/graphiti/graphiti/overview [14]: https://github.com/getzep/graphiti/tree/main/mcp_server [15]: https://www.cursor.com [16]: https://windsurf.com [17]: https://cline.bot [18]: https://roocode.com [19]: https://github.com/eyaltoledano/claude-task-master/tree/main/.cursor/rules <!-- prettier-ignore-end --> --- # Article: [Enhanced Telegram's callback_data with protobuf + base85](/2025/02/05/enhanced-telegram-callback-data/) If you've ever developed a Telegram Bot, you probably know what [callback_data][1] is. If not, in short, it's an arbitrary string that you can use in your backend to understand which button was pressed. As your bot grows, your `callback_data` can become messy. This is something I have experienced. Today, I want to share a new way to handle this problem. <!--more--> ## What's wrong with callback_data? I assume you already know the Bot API. To understand the issue better, let's look at some examples. I'll write the code in Scala, but the basic idea works with any programming language. 
<%= render "alert", type: "info", message: "Some frameworks restrict access to that parameter and manage it automatically. If that's your case, then this article isn't for you 🌞" %>

Imagine you have a bot that manages a list of something. Let's say it's a list of goods:

- Your application has a `/ls` command, which makes the bot respond with a message containing a numbered list of goods and an `inline_keyboard` for choosing a good to interact with.
- Each button has `callback_data` set to `info_${id}`, where `${id}` is the good's id.
- When a user clicks a button, your bot responds with a message that contains information about the chosen good and an `inline_keyboard` with buttons like "Delete", "Buy", and "Assign a category". Respectively, these buttons have `callback_data` set to `delete_${id}`, `buy_${id}`, and `assign_category_${id}`.
- When a user clicks "Assign a category", your bot displays a hardcoded numbered list of categories to assign, again with an `inline_keyboard` with buttons for choosing a category. These buttons have `callback_data` that looks like `assign_category_${id}_${categoryId}`, where `${id}` is the good's id and `${categoryId}` is the chosen category.
- And now imagine that you also have to update your initial info message after assigning a category. Now your `callback_data` becomes at least something like `assign_category_${id}_${categoryId}_${messageId}` 🤡.

While it looks okay to handle something like `info_${id}`, `buy_${id}`, and `remove_${id}`, more complex scenarios like `assign_category_${id}_${categoryId}_${messageId}` look unwieldy and are hard to manage, especially when you have a lot of them. Moreover, malicious users could attempt to attack your application by passing unexpected `callback_data` content, and that's much easier for them to do if you use such a plain and straightforward format. Of course, you must check access regardless of the format you're using, but still, protecting your format is another security wall.
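To make the fragility concrete, here's a stdlib-only Python sketch contrasting the ad-hoc string format with a structured, base85-encoded payload. The `struct` layout and function names here are illustrative stand-ins of my own, not from the bot (my actual implementation uses protobuf and Scala):

```python
import base64
import struct

# Naive scheme: ad-hoc strings that every handler must parse by hand.
def encode_plain(good_id: int, category_id: int, message_id: int) -> str:
    return f"assign_category_{good_id}_{category_id}_{message_id}"

# Structured scheme: a fixed binary layout encoded with Ascii85.
# `struct` stands in for protobuf here, just to keep the sketch stdlib-only.
def encode_structured(good_id: int, category_id: int, message_id: int) -> str:
    payload = struct.pack("<III", good_id, category_id, message_id)
    return base64.a85encode(payload).decode("ascii")

def decode_structured(data: str):
    try:
        payload = base64.a85decode(data.encode("ascii"))
        return struct.unpack("<III", payload)
    except (ValueError, struct.error):
        return None  # malformed or tampered callback_data is rejected outright

# A well-formed callback round-trips; arbitrary strings are rejected; and the
# encoded form stays far below Telegram's 64-byte callback_data limit.
packed = encode_structured(7, 3, 1015)
assert len(packed.encode("ascii")) <= 64
assert decode_structured(packed) == (7, 3, 1015)
assert decode_structured("assign_category_7_evil") is None
```

In a real bot, protobuf replaces the fixed `struct` layout, which additionally gives you schema evolution and typed dispatch over message variants.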
## How to fix this mess using protobuf + base85

In my [Advanced Link Saver][2] bot ([a small article about the tech stack][3]), I have a lot of complex scenarios, and handling them was a nightmare. That's why I implemented the following approach to managing `callback_data`:

- Describe every callback using a protobuf message.
- `callback_data` is no longer a plain string like `info_${id}`, but base85-encoded protobuf bytes.
- Handlers try to decode the base85 string and then parse the underlying protobuf message.
- Then you match the result against type-safe protobuf messages.

<%= render "alert", type: "info", message: "base85 (also known as [ASCII85](https://wikipedia.org/wiki/ASCII85)) is just a way to encode bytes into a string. You could also use good old base64 here, but base85 is more size-efficient. That matters because the `callback_data` size is limited to 64 bytes." %>

Let's see how it looks in code. For example, I have callbacks for viewing information about links and about categories. My protobuf descriptors look like this:

```protobuf
message InfoCategory {
  uint32 categoryId = 1;
}

message InfoLink {
  uint32 linkId = 1;
}

message Info {
  oneof callbackData {
    InfoLink infoLink = 1;
    InfoCategory infoCategory = 2;
  }
}
```

Next, all we need is to decode the base85 string into bytes and then try to parse those bytes as a protobuf message. You can do this in any programming language; in my case, such a handler looks like this in Scala:

```scala
class InfoCallbackHandler() {

  override def handle(callback: CallbackQuery) = (
    callback
      .data // this variable is a plain `callback_data` string
      .flatMap(ProtobufUtils.fromBase85String[Info])
      .map(_.callbackData)
  ) match {
    case Some(Info.CallbackData.InfoCategory(InfoCategory(categoryId))) =>
      // ...
    case Some(Info.CallbackData.InfoLink(InfoLink(linkId))) =>
      // ...
    case _ =>
      ZIO.fail(new IllegalArgumentException())
  }
}
```

Here's how I fill the `callback_data` value on buttons:

```scala
val infoButton = InlineKeyboardButton(
  text = "Info",
  callbackData = Some(
    ProtobufUtils.toBase85String(
      Info(
        Info.CallbackData.InfoLink(InfoLink(link.id /* int */))
      )
    )
  )
)
```

And here is the codec itself, though it's probably interesting only to Scala developers. It simply decodes/encodes protobuf messages from/to base85 data:

```scala
// Using https://github.com/fzakaria/ascii85 to decode/encode base85 data
import com.github.fzakaria.ascii85.Ascii85
// And https://scalapb.github.io to generate Scala classes from protobuf messages
import scalapb.GeneratedMessage
import scalapb.GeneratedMessageCompanion

object ProtobufUtils {

  def fromBase85String[Message <: GeneratedMessage](value: String)(implicit
      mComp: GeneratedMessageCompanion[Message]
  ): Option[Message] =
    mComp.validate(Ascii85.decode(value)).toOption

  def toBase85String[Message <: GeneratedMessage](s: Message)(implicit
      mComp: GeneratedMessageCompanion[Message]
  ): String =
    Ascii85.encode(mComp.toByteArray(s))
}
```

## Conclusion

As you can see, everything is now type-safe, well-organized, better secured, and there is no room for mistakes. This is unlike the plain-string approach, where you parse arbitrary strings using regular expressions or `startsWith` and have to remember how to construct those strings correctly.

<!-- prettier-ignore-start -->

[1]: https://core.telegram.org/bots/api#callbackquery
[2]: https://t.me/sp_link_saver_bot
[3]: /2023/09/08/link-saver-bot-for-telegram/

<!-- prettier-ignore-end -->

---

# Article: [Handling callback_data in a Telegram bot with protobuf + base85](/ru/2025/02/05/enhanced-telegram-callback-data/)

If you've ever developed a Telegram bot, you probably know what [callback_data][1] is.
If not, in short, it's an arbitrary string attached to buttons in the chat, which lets your backend determine exactly which button was pressed. As your bot scales, managing `callback_data` values most likely turns into a mess. At least, that's what happened to me. So today I want to share a practice for organizing all this mess into nice, well-structured code.

<!--more-->

## What's wrong with callback_data?

Here I assume you're already familiar with the Bot API. To better understand the problem, let's look at a few examples. Below I write the code in Scala, but the approach can be applied in any programming language.

<%= render "alert", type: "info", message: "Some frameworks don't even give the developer access to this parameter and manage it themselves. If yours is one of them, this article isn't really for you 🌞" %>

Imagine you have a bot that manages a list of something. For example, a list of goods:

- Your application has an `/ls` command, which outputs a message with a numbered list of goods and an `inline_keyboard` for choosing a good (buttons "1", "2", "3" and so on).
- Each keyboard button has `callback_data` set to `info_${id}`, where `${id}` is the good's id.
- When the user picks a good by pressing a button, the bot replies with a message containing information about the chosen good and an `inline_keyboard` with "Delete", "Buy", and "Assign a category" buttons. These buttons have `callback_data` set to `delete_${id}`, `buy_${id}`, and `assign_category_${id}` respectively.
- When the user presses "Assign a category", the bot displays a hardcoded numbered list of categories that can be assigned to the good, again with an `inline_keyboard` for choosing a category by number.
- Each of these buttons has `callback_data` that now looks like `assign_category_${id}_${categoryId}`, where `${id}` and `${categoryId}` are the ids of the good and the category respectively.
- And now imagine you also want to update the previously shown message with the good's information on click; then our `callback_data` turns into something like `assign_category_${id}_${categoryId}_${messageId}` 🤡.

Handling callbacks like `info_${id}`, `buy_${id}`, `remove_${id}` still looks fine, but more complex scenarios like `assign_category_${id}_${categoryId}_${messageId}` already look somewhat unwieldy and are hard to manage, especially if you have to handle many such scenarios.

Moreover, if there are malicious users among your audience, they may try to attack your bot by sending it arbitrary `callback_data`. This attack vector is easier to use if the parameter's values aren't protected in any way. Of course, you should always defend against this regardless of how you handle the parameter, but extra protection never hurts.

## How to fix this mess using protobuf + base85

In my [Advanced Link Saver][2] bot ([a small article about the tech stack][3]) there are quite a lot of complex interaction scenarios, and handling them all was really difficult. So I arrived at this slightly exotic method of handling the `callback_data` parameter:

- Every callback is described by a protobuf message.
- The `callback_data` value is no longer a bare string like `info_${id}`, but a base85-encoded protobuf message.
- Callback handlers in the code no longer work with bare strings via regexes; instead they expect a base85 string, which they try to decode into a specific protobuf message.
- And once everything is decoded and parsed into an object, a typed match over the possible values of that object follows.

<%= render "alert", type: "info", message: "base85 (also known as [ASCII85](https://wikipedia.org/wiki/ASCII85)) is just an algorithm for encoding raw bytes into a string. You could also use good old base64, but base85, although less widespread, is slightly more size-efficient. This matters because the `callback_data` value is limited to 64 bytes." %>

Now let's see how all this looks in code. For example, I have callbacks for viewing information about saved links and categories. My protobuf messages for them look like this:

```protobuf
message InfoCategory {
  uint32 categoryId = 1;
}

message InfoLink {
  uint32 linkId = 1;
}

message Info {
  oneof callbackData {
    InfoLink infoLink = 1;
    InfoCategory infoCategory = 2;
  }
}
```

Next, all we need is to decode base85 into a byte array and try to decode those bytes into a protobuf message. You can do this in any language, but in my case the handler for these callbacks looks like this in Scala:

```scala
class InfoCallbackHandler() {

  override def handle(callback: CallbackQuery) = (
    callback
      .data // this variable is a plain `callback_data` string
      .flatMap(ProtobufUtils.fromBase85String[Info])
      .map(_.callbackData)
  ) match {
    case Some(Info.CallbackData.InfoCategory(InfoCategory(categoryId))) =>
      // ...
    case Some(Info.CallbackData.InfoLink(InfoLink(linkId))) =>
      // ...
    case _ =>
      ZIO.fail(new IllegalArgumentException())
  }
}
```

And here's how I fill the `callback_data` values on buttons:

```scala
val infoButton = InlineKeyboardButton(
  text = "Info",
  callbackData = Some(
    ProtobufUtils.toBase85String(
      Info(
        Info.CallbackData.InfoLink(InfoLink(link.id /* int */))
      )
    )
  )
)
```

And here is the codec itself, though I suppose it will make sense only to Scala developers. Actually, it's just encoding/decoding a protobuf message to/from base85:

```scala
// Using https://github.com/fzakaria/ascii85 to decode/encode base85 data
import com.github.fzakaria.ascii85.Ascii85
// And https://scalapb.github.io to generate Scala classes from protobuf messages
import scalapb.GeneratedMessage
import scalapb.GeneratedMessageCompanion

object ProtobufUtils {

  def fromBase85String[Message <: GeneratedMessage](value: String)(implicit
      mComp: GeneratedMessageCompanion[Message]
  ): Option[Message] =
    mComp.validate(Ascii85.decode(value)).toOption

  def toBase85String[Message <: GeneratedMessage](s: Message)(implicit
      mComp: GeneratedMessageCompanion[Message]
  ): String =
    Ascii85.encode(mComp.toByteArray(s))
}
```

## Conclusion

As you can see, everything is now typed, nicely organized across the code, a bit more secure, and there's no longer any room for mistakes. This is unlike the usual approach, where you build bare strings and have to make sure they're constructed correctly, parsed correctly, and that there are no conflicts between parsers.

And subscribe to my <a href="<%= site.metadata.author.tg_channel_link %>">🛫 Telegram channel</a>, by the way. It's full of all sorts of thoughts about development that don't fit the blog format. Thanks!

<!-- prettier-ignore-start -->

[1]: https://core.telegram.org/bots/api#callbackquery
[2]: https://t.me/sp_link_saver_bot
[3]: /2023/09/08/link-saver-bot-for-telegram/

<!-- prettier-ignore-end -->

---

# Article: [Migration from Jekyll to Bridgetown](/2024/12/04/migration-from-jekyll-to-bridgetown/)

For a long time, this website has been built using [Jekyll][35]. Jekyll does its job quite well. It has a mature, battle-tested codebase, a lot of available plugins, a fairly large community, and hundreds of websites are still running Jekyll. You can really do a lot of things with it, and you are likely to be satisfied. However, there are some problems that you might not notice at first glance, like I did.
Moreover, I hadn't noticed them for a long time, until I started running into issues with further website development.

<!--more-->

## What's wrong with Jekyll?

In short, it's a bit "outdated". In every way. It lacks maintenance, it doesn't release new major features, its plugins are usually outdated, and if you want to implement something special that isn't covered by tutorials or documentation, you'll likely have a difficult time. If you're cool with Jekyll and you just write stuff, then its "outdated-ness" might not bother you. Jekyll might still work for you, as it does for many others.

I started noticing that I was missing something in Jekyll when I began to improve my website not just by writing but also by implementing new features. So, what exactly motivated me to move on:

- If you need some frontend features in your project, it's not an easy task with Jekyll. Your website is probably doomed to be coded in vanilla JavaScript.
- The same goes for CSS processing: if you want minification, sourcemaps, or maybe some CSS packages, it's better to forget about trying to deal with that in Jekyll.
- You have to deal with outdated gems (or Ruby itself) everywhere; usually, forking something is the only way to work around it. Sure, you can say that's not a Jekyll problem, but the lack of out-of-the-box features heavily encourages you to use plugins. As time goes on, it only gets harder. Dancing around version conflicts becomes your daily routine.
- For example, [jekyll-assets][1], the only mature plugin for processing assets, is abandoned (the last commit was 4 years ago). To use it today, you have to fork and update it, which is not an easy task. Surely its functionality, at least things like hashes in CSS file names, should somehow be bundled in Jekyll itself, but it's not.
- Another example, [jekyll-spaceship][2], a Jekyll Swiss Army knife with many useful functions, is abandoned too.
You either have to update it yourself or copy-paste some of its functionality directly into your project.

And, as I found out recently, there are plenty of articles about the current state of Jekyll, and I'm not the only one worried about it:

- [Is the Jekyll project dead?][5]
- [Future of Jekyll project (engine behind GitHub Pages) in doubt?][3]
- [Jekyll and the Genesis of the Jamstack][4]

## Why Bridgetown?

So, I decided to move on from Jekyll. These days there are really a lot of static website generators: [Hugo][6], [Next.js][9], [Nuxt][10], [11ty][7], [Gatsby][8] and so on. They are actively developed, have large communities and tons of plugins and bundled features, and if you're starting from scratch, it's probably better not to start with Jekyll. All of them have their pros and cons, so choose carefully, because once you pick something, it will be with you for a long time, and migration will probably be painful.

[Bridgetown][11], a Jekyll fork, is among the alternatives. Although it's far from the most popular solution, I bet it's the most painless migration path from Jekyll. What does Bridgetown offer? Tons of new features, new ideas, and new possibilities. I'm not going to describe [all of them][18], but here are some notable ones:

- "Easy" migration for Jekyll adopters. "Easy" because you'll probably still have to rewrite everything nearly from scratch, but Bridgetown and Jekyll still have many things in common, so you already know how things work here. It's better than, for example, migrating to a completely unknown JavaScript framework.
- Integration with [esbuild][12], [npm][13] and other things from the frontend world. No more restrictions on JavaScript and CSS.
- Templates are no longer restricted to [Liquid][14]. You can use [ERB][15] or even some exotic, less-known options.
- [Extended plugins API][16].
- [Bundled support for API endpoints and file-based routing][17], in case you need them.
With truly static websites you usually don't, but at least now you have the option.

- [Expressive documentation][19]. There is really a lot of documentation.

And much, much more. Just dive into the documentation and see for yourself. Also, be aware that Bridgetown 2.0 is coming soon ([2.0.0-beta3][20] was just released), so if you're just starting, stick to the ["edge" documentation][21], since 2.0 comes with a lot of breaking changes.

## Migration

<%= render "alert", type: "info", message: "There is [an official guide on how to do it](https://www.bridgetownrb.com/docs/migrating/jekyll). If you're going to migrate, be sure to visit it first." %>

Migration took me weeks until everything worked as before, but it was definitely worth it in the long run. In short, I just created a new project, moved all my markdown content there, and started restoring functionality. The only big problem I encountered is that there was nothing like [jekyll_picture_tag][23] for automatic responsive image generation. All other commonly used plugins from the Jekyll world are already implemented: [bridgetown-feed][24], [bridgetown-related-posts][25], [bridgetown-seo-tag][26], [bridgetown-sitemap][27]. You can also find a curated [list of official and third-party plugins][28].

Frontend integration works great; there were no unsolvable problems with it. JavaScript and CSS files are processed as they should be. For example, I easily configured [open-props][29] and [PurgeCSS][30], which would have been a nightmare on Jekyll. I would also like to highlight the [HTML and XML Inspectors][31] API, which allows you to easily manipulate your HTML output. With it, you can painlessly add custom code to your generated pages, which was extremely painful with Jekyll.

### Meet bridgetown_picture_tag

As I said, the only problem was the missing plugin for generating responsive images. So, now it's here: [seroperson/bridgetown_picture_tag][32].
It's a fork of `jekyll_picture_tag`, adapted for Bridgetown. Everything works exactly the same, so you can [follow the docs from the original library][33]. I also added [a working Bridgetown example][34] in the tests. To get it into your project:

```ruby
gem "bridgetown_picture_tag", git: "https://github.com/seroperson/bridgetown_picture_tag.git"
```

<%= picture 'images/2024-12-04-bridgetown/pekingese.webp 5:4' %>

_Just a beautiful doggy (webp cropped to 5:4 ratio with various responsive sizes)_

However, I'm not a heavily involved Ruby developer and did this just to satisfy my website's needs, so I'm not going to develop it actively. I'm still waiting for a native solution, or maybe for someone who will maintain it properly. I saw that the original author [was willing to unify this solution to work for both Jekyll and Bridgetown][36], so that's probably what we need. I created [an issue][37] there; maybe it will lead to something.

<!-- prettier-ignore-start -->

[1]: https://github.com/envygeeks/jekyll-assets
[2]: https://github.com/jeffreytse/jekyll-spaceship
[3]: https://www.theregister.com/2021/09/14/future_of_jekyll_project_engine/
[4]: https://www.bridgetownrb.com/future/rip-jekyll/
[5]: https://talk.jekyllrb.com/t/is-the-jekyll-project-dead/6820/8
[6]: https://gohugo.io
[7]: https://www.11ty.dev
[8]: https://www.gatsbyjs.com
[9]: https://nextjs.org
[10]: https://nuxt.com
[11]: https://www.bridgetownrb.com/news/time-to-visit-bridgetown/
[12]: https://esbuild.github.io
[13]: https://www.npmjs.com
[14]: https://shopify.github.io/liquid/
[15]: https://github.com/ruby/erb
[16]: https://www.bridgetownrb.com/docs/plugins
[17]: https://www.bridgetownrb.com/docs/routes
[18]: https://www.bridgetownrb.com/docs/migrating/features-since-jekyll
[19]: https://www.bridgetownrb.com/docs
[20]: https://www.bridgetownrb.com/release/bridgetown-2.0-beta-3-with-better-performance/
[21]: https://edge.bridgetownrb.com
[22]: https://www.bridgetownrb.com/docs/migrating/jekyll
[23]:
https://github.com/rbuchberger/jekyll_picture_tag
[24]: https://github.com/bridgetownrb/bridgetown-feed
[25]: https://github.com/mpclarkson/bridgetown-related-posts
[26]: https://github.com/bridgetownrb/bridgetown-seo-tag
[27]: https://github.com/ayushn21/bridgetown-sitemap
[28]: https://www.bridgetownrb.com/plugins/
[29]: https://www.bridgetownrb.com/docs/bundled-configurations#open-props
[30]: https://www.bridgetownrb.com/docs/bundled-configurations#purgecss-post-build-hook
[31]: https://www.bridgetownrb.com/docs/plugins/inspectors
[32]: https://github.com/seroperson/bridgetown_picture_tag
[33]: https://rbuchberger.github.io/jekyll_picture_tag/
[34]: https://github.com/seroperson/bridgetown_picture_tag/tree/61f30c8c8388aeacaeb030d30b6ae0780e7faf26/test/fixtures
[35]: https://jekyllrb.com
[36]: https://www.reddit.com/r/Jekyll/comments/i1h1di/comment/fzz88rh/
[37]: https://github.com/rbuchberger/jekyll_picture_tag/issues/319

<!-- prettier-ignore-end -->

---

# Article: [Gaming with v2rayN (or any other VLESS proxy)](/2024/11/11/gaming-with-v2rayn-nekoray-vless/)

I had been using an XRay + VLESS proxy for a long time without any problems until recently. With the TUN mode enabled and process filtering configured inside [v2rayN][2] (or [Nekoray][3]), it was sufficient for daily web surfing. Everything went well until [Discord was blocked in Russia][1]. After I attempted to run the proxy with TUN mode enabled to make Discord work while playing some games at the same time, it turned out that not every game runs well: increased ping, packet loss, broken functionality, and so on.

<!--more-->

<%= render "alert", type: "warning", message: "We're talking exactly about **non-proxified** gaming, when you just need to keep your v2rayN running to proxify, for example, Discord, but not your game. If your game itself is blocked and you need to proxify it, that's not the case this article is about.
In this case you would need to either run the game through the proxy (which usually isn't possible) or run it with the TUN mode, which breaks things, as I'll show later in this article." %>

While there were no problems with Discord itself (it should always be easy to configure if your proxy is UDP-compatible, like the XRay + VLESS + v2rayN combination), some other applications appear to break when the TUN mode is enabled. For me, only games are affected, but I've heard it also breaks other specific processes. Here are some known examples: [Hunt: Showdown][4], [Valorant][5], Lost Ark (tested it myself), etc. This makes it unreliable to use process filtering with the TUN mode to proxify Discord while playing a non-proxified game. The latter will simply not work (or it will, but with problems or some functionality unavailable).

## What's wrong exactly

First, let's see what TUN is according to [Wikipedia][6]:

> TUN, namely network TUNnel, simulates a network layer device and operates in
> layer 3 carrying IP packets. TAP, namely network TAP, simulates a link layer
> device and operates in layer 2 carrying Ethernet frames. TUN is used with
> routing. TAP can be used to create a user space network bridge.

In short, when you use the TUN mode in v2rayN, it creates a virtual interface that now receives all the traffic and then decides whether to proxy it or send it directly. Even if your rules aren't proxying your game's traffic, it still goes through the virtual interface, which may increase latency and cause packet drops and undefined behavior, depending on the game's implementation details. A game can be incompatible with the TUN mode for various reasons. One such reason, which I noticed in Hunt: Showdown, is the inability to correctly ping servers.
[It's known that you can't ping from behind a proxy][7], and when the TUN mode is enabled, as mentioned above, everything goes through the proxy even if your rules don't actually proxy it (although I'm not sure whether this is really the reason for the 0ms ping, or if it's just because the TUN code is poorly written). So, summing up, to avoid disrupting your gaming experience, **you should avoid enabling the TUN mode during your gaming session**.

## Life without TUN mode

Let's examine a concrete case: I need to proxy Discord, which is blocked in my country, using v2rayN on Windows. This must be done without the TUN mode, so as not to break the game I'm running at the same time. Several options are available.

### Set a proxy for Discord at the application level

Why do we need the TUN mode to proxy Discord at all? Perhaps because it's much easier than specifying a proxy at the application level. On macOS or Linux you can simply pass the `--proxy-server` parameter, but on Windows, you have to use more complicated solutions. I tried some of these, but only one actually worked for me. All of them inject custom DLLs into the Discord process, so no third-party software is needed:

- [runetfreedom/discord-voice-proxy][10] - this one worked for me.
- [aiqinxuancai/discord-proxy][8] and [hdrover/discord-drover][9] - both look pretty much the same, but they're probably a bit outdated: my Discord installation doesn't have a `version.dll` file at all, which is probably why they didn't work for me.

### Use third-party software to route traffic

If you're unable to set a proxy at the application level, you can try using third-party software to achieve the same result. Several options are available here as well:

- [wiresock/proxifyre][14] - partially open-source; works, but is not very user-friendly: configuration via `.json`, no GUI, no installer. It also requires the [Windows Packet Filter][15] driver to be installed.
- [ProxyCap][12] - paid; works, but breaks some applications and requires additional actions to fix them (see their [FAQ][13], question "_WSL stops working after installing ProxyCap_").
- [Proxifier][11] - paid, and not fully suitable because it doesn't support UDP.

## Conclusion

This way you can get rid of the TUN mode and keep your gaming experience intact. One option or another should be suitable for any application, not just Discord. See also my note [🪟 Mastering Windows performance][16] for additional gaming improvements. Good luck on the battlefield!

<!-- prettier-ignore-start -->

[1]: https://www.rbc.ru/technology_and_media/08/10/2024/67054cbf9a79474670135b84
[2]: https://github.com/2dust/v2rayN
[3]: https://github.com/MatsuriDayo/nekoray
[4]: https://github.com/MatsuriDayo/nekoray/issues/1463
[5]: https://www.reddit.com/r/ValorantTechSupport/comments/yjpg5x/playing_valorant_with_v2rayvlessws/
[6]: https://en.wikipedia.org/wiki/TUN/TAP
[7]: https://superuser.com/a/175441
[8]: https://github.com/aiqinxuancai/discord-proxy
[9]: https://github.com/hdrover/discord-drover
[10]: https://github.com/runetfreedom/discord-voice-proxy
[11]: https://www.proxifier.com
[12]: https://www.proxycap.com
[13]: https://www.proxycap.com/faq.html
[14]: https://github.com/wiresock/proxifyre
[15]: https://github.com/wiresock/ndisapi/releases
[16]: /2022/11/28/mastering-windows-performance/

<!-- prettier-ignore-end -->

---

# Article: [Gaming with v2rayN enabled (or any other VLESS proxy)](/ru/2024/11/11/gaming-with-v2rayn-nekoray-vless/)

I used XRay + VLESS for a long time without any problems until recently. I enabled the TUN mode and the out-of-the-box process filter in [v2rayN][2] (or [Nekoray][3]), and that was enough for everyday web surfing, until [Discord was blocked in Russia][1].
After I tried running a game alongside a VLESS proxy (in TUN mode, for Discord to work), it turned out that far from every game works correctly: increased ping, packet loss, some features not working at all, and so on.

<!--more-->

<%= render "alert", type: "warning", message: "We are talking here specifically about the case where the game itself is **not blocked** and we need the proxy only to route, for example, Discord, not the game. If you need to proxy the game's own traffic, this article is not quite about that. In that case you either have to route the game through the proxy explicitly (which is rarely possible) or play with the TUN mode enabled, which, as I'll show below, can break some other applications on the system." %>

There is no problem if you are just sitting in Discord with the proxy enabled (in TUN mode). If your proxy supports proxying UDP traffic, as, for example, the XRay + VLESS + v2rayN stack does, then voice and everything else will work in Discord. Problems appear when you need to play some game at the same time, because, as I said above, not every game works well with the TUN mode. In general, this applies not only to games but, in theory, to any application; in my case, though, I only noticed it with games. For example, here are games that definitely "don't get along" with a proxy in TUN mode: [Hunt: Showdown][4], [Valorant][5], Lost Ark (tested myself). This problem makes it impossible (or at least unstable) to play while a proxy is running in TUN mode for Discord.

## What exactly is wrong with the TUN mode

First, let's see what the TUN mode actually is, according to [Wikipedia][6]:

> TUN, namely network TUNnel, simulates a network layer device and operates in
> layer 3 carrying IP packets. TAP, namely network TAP, simulates a link layer
> device and operates in layer 2 carrying Ethernet frames. TUN is used with
> routing. TAP can be used to create a user space network bridge.
In short, when you enable the TUN mode in, say, v2rayN, the program creates a virtual interface through which **all** of your traffic now flows. Literally all of it, even the traffic that will ultimately go directly rather than through the proxy. All traffic first enters this virtual interface and is routed from there either to the proxy itself or directly. So even if you haven't listed the game's process in the v2rayN rules, all of its traffic still additionally passes through the TUN virtual interface before taking its usual route, which is where problems like increased ping, packet loss and so on come from. The exact set of problems depends on the implementation of the particular game. Some games don't work with the TUN mode at all, and there can be many reasons for that. One of them, which I noticed in Hunt: Showdown, is the inability to ping servers correctly. [It's known that you can't ping behind a proxy][7], and when the TUN mode is enabled, as I described above, all traffic essentially passes through the proxy (in some form), even if by the rules it ultimately goes directly. That's why, for example, Hunt: Showdown with the TUN mode enabled shows a 0ms ping to all servers and therefore marks them as unavailable, making it impossible to get past the menu. So, summing up, to minimize problems while gaming, **you should avoid the TUN mode**.

## Life without the TUN mode

Let's look at a concrete case: we need to proxy Discord (TCP + UDP) using v2rayN on Windows. We want to do this without the TUN mode, so as not to break the game we are running at the same time. There are several options.

### Route Discord through the proxy at the application level

Why do we need the TUN mode for proxying Discord at all? Essentially, because it's much easier than getting Discord to send its own traffic through a proxy.
On OSX or Linux you can simply pass the `--proxy-server` parameter, but on Windows this option is missing out of the box, so you have to resort to workarounds. I tried some of them, but only one worked for me. Each of these solutions injects a custom DLL with all the logic into the process, so no third-party applications need to be running:

- [runetfreedom/discord-voice-proxy][10] - this one worked for me.
- [aiqinxuancai/discord-proxy][8] and [hdrover/discord-drover][9] - both look pretty much the same, but they are probably a bit outdated, because my installation doesn't have the `version.dll` file they hook at all. Still, you can give them a try.

### Use third-party software to route traffic to the proxy

If you couldn't route traffic to the proxy from the application itself, you can try installing third-party software. They all usually work by installing a driver into the system that controls where each piece of traffic gets routed. A few options I found:

- [wiresock/proxifyre][14] - partially open-source; worked in my case, but the program itself is not very user-friendly: configuration via `.json`, no GUI, no installer. It also requires a separate installation of the [Windows Packet Filter][15] driver.
- [ProxyCap][12] - worked in my case, but it's paid and sometimes breaks some applications, which then have to be fixed by hand (see their [FAQ][13], the question "_WSL stops working after installing ProxyCap_").
- [Proxifier][11] - paid and not suitable in this particular case, since it can't route UDP traffic.

## Conclusion

This way you can get rid of the need to run the proxy in TUN mode. One of these solutions will surely work for you, whether it's about Discord or not. Good luck on the battlefield! And subscribe to my <a href="<%= site.metadata.author.tg_channel_link %>">🛫 Telegram channel</a>, by the way. It's full of all sorts of thoughts about development that don't fit the blog format. Thanks!
<!-- prettier-ignore-start -->
[1]: https://www.rbc.ru/technology_and_media/08/10/2024/67054cbf9a79474670135b84
[2]: https://github.com/2dust/v2rayN
[3]: https://github.com/MatsuriDayo/nekoray
[4]: https://github.com/MatsuriDayo/nekoray/issues/1463
[5]: https://www.reddit.com/r/ValorantTechSupport/comments/yjpg5x/playing_valorant_with_v2rayvlessws/
[6]: https://en.wikipedia.org/wiki/TUN/TAP
[7]: https://superuser.com/a/175441
[8]: https://github.com/aiqinxuancai/discord-proxy
[9]: https://github.com/hdrover/discord-drover
[10]: https://github.com/runetfreedom/discord-voice-proxy
[11]: https://www.proxifier.com
[12]: https://www.proxycap.com
[13]: https://www.proxycap.com/faq.html
[14]: https://github.com/wiresock/proxifyre
[15]: https://github.com/wiresock/ndisapi/releases
[16]: /2022/11/28/mastering-windows-performance/
<!-- prettier-ignore-end -->

---

# Article: [Building Jekyll website with Nix](/2024/08/03/building-jekyll-website-with-nix/)

For a long time, I have been building my Jekyll website on CI using Docker: a plain Ruby image, installing dependencies, caching, running the Jekyll build. On my local machine, I used to manage the Ruby development environment myself. This is a working solution, and generally there is nothing wrong with it. However, right now I am continuously migrating everything to Nix, and this website is no exception. My environment is [mostly managed by Nix][1]. I also had a great experience using it to [build a real project][9], and now it's time to move to Nix to build this website too. In addition to all the benefits of Nix, it's also a good thing to get rid of another "out-of-store" dependency like Docker 👺.

<!--more-->

In what follows, I assume that you already have Nix installed.

## Alternative configurations

There are some articles about building Jekyll using Nix, such as:

- [Build a Jekyll blog with Nix using flakes][2]
- [Using Jekyll and Nix to blog][3]
- [Building a Jekyll Environment with NixOS][4]

However, all of them are a little bit outdated.
Still, there is some interesting information to read there, but nowadays there is a much easier way to configure Jekyll + Nix, so read on.

## What it will look like

After setting things up, you can use the following commands to interact with your Jekyll website (serve, build and so on):

```bash
# defaults to `jekyll build`
nix run .
# `nix run . args` expands to `jekyll args`
nix run . serve
# `nix run .#bundle args` expands to `bundle args`
nix run .#bundle lock
# `nix run .#bundix -- args` expands to `bundix args`
nix run .#bundix -- -m
# Enters development shell with all dependencies installed
nix develop
```

Sounds nice: no `Dockerfile` configuration, no global dependencies, a single tool to develop locally and build on CI. So, let's go through the Nix configuration you need to fill in.

## Jekyll + Nix configuration

Everything you need is a single `flake.nix` file which looks like this:

```nix
{
  inputs = {
    nixpkgs.url = "github:NixOS/nixpkgs";
    bundix = {
      url = "github:inscapist/bundix/main";
      inputs.nixpkgs.follows = "nixpkgs";
    };
    ruby-nix = {
      url = "github:inscapist/ruby-nix";
      inputs.nixpkgs.follows = "nixpkgs";
    };
  };
  outputs = { self, nixpkgs, bundix, ruby-nix }:
    let
      system = "x86_64-linux";
      pkgs = import nixpkgs {
        inherit system;
        overlays = [ ruby-nix.overlays.ruby ];
      };
      rubyNix = ruby-nix.lib pkgs;
      bundixcli = bundix.packages.${system}.default;
      deps = with pkgs; [ env ruby bundixcli ];
      inherit (rubyNix {
        name = "seroperson.gitlab.io";
        gemset = ./gemset.nix;
        gemConfig = pkgs.defaultGemConfig;
      }) env ruby;
    in {
      packages.${system} = let
        bundlecli = pkgs.writeShellApplication {
          name = "bundle";
          runtimeInputs = deps;
          text = ''
            export BUNDLE_PATH=vendor/bundle
            bundle "$@"
          '';
        };
        jekyll = pkgs.writeShellApplication {
          name = "jekyll";
          runtimeInputs = deps;
          text = ''
            if [ $# -eq 0 ]; then
              jekyll build
            else
              jekyll "$@"
            fi
          '';
        };
      in {
        jekyll = jekyll;
        bundle = bundlecli;
        bundix = bundixcli;
        default = jekyll;
      };
      devShells.${system}.default = pkgs.mkShell {
        shellHook = ''
          export BUNDLE_PATH=vendor/bundle
        '';
        buildInputs = deps;
      };
    };
}
```

In short, it's just the default [ruby-nix][5] configuration plus some Nix aliases. Nothing more, actually. In fact, the minimal Nix configuration for Jekyll is even shorter, but those aliases are so useful that I decided to include them here too. Once `flake.nix` is created, you will need to generate a `gemset.nix` file by running the following command:

```bash
nix run .#bundix -- -m
```

`bundix` is a tool which pins dependencies for Nix according to `Gemfile.lock`. Be sure to put `gemset.nix` under git control, and remember to keep your `Gemfile.lock` and `gemset.nix` files synchronized. Finally, don't forget to edit your Jekyll `_config.yml` so that your Nix files aren't published:

```yaml
# ...
exclude:
  - gemset.nix
  - flake.nix
  - flake.lock
# ...
```

That's it. Now everything should work.

## Additional configuration for plugins with native libraries

If you are using [jekyll_picture_tag][6] (with `libvips`) or any other plugin with a native library, you may also need some additional configuration to make everything work. Many libraries are already covered by Nix ([with a bunch of hacks][7]), and mostly they just work, but sometimes strange things happen. If errors like `Could not open library` occur, this is probably your case. The solution differs from case to case, but the general approach is the same: in the package sources, you replace the default native library references with nixified references. That's how it looks for `ruby-vips`:

```nix
{
  inputs = {
    # ...
  };
  outputs = { ... }:
    let
      # ...
      inherit (rubyNix {
        # ...
        gemConfig = pkgs.defaultGemConfig // {
          ruby-vips = attrs: {
            postInstall = ''
              cd "$(cat $out/nix-support/gem-meta/install-path)"
              substituteInPlace lib/vips.rb \
                --replace "library_name('vips', 42)" '"${pkgs.vips.out}/lib/libvips${pkgs.stdenv.hostPlatform.extensions.sharedLibrary}"' \
                --replace "library_name('glib-2.0', 0)" '"${pkgs.glib.out}/lib/libglib-2.0${pkgs.stdenv.hostPlatform.extensions.sharedLibrary}"' \
                --replace "library_name('gobject-2.0', 0)" '"${pkgs.glib.out}/lib/libgobject-2.0${pkgs.stdenv.hostPlatform.extensions.sharedLibrary}"'
            '';
          };
        };
      }) env ruby;
    in {
      # ...
    };
}
```

## Configuring CI with nixified Jekyll

Well, now we are building our Jekyll website using Nix locally, so why not build it in the same way on CI? This lets us use the same environment on both CI and the local machine, eliminating redundant CI-only configuration and simplifying the pipeline. That's how it looks for GitLab CI using [cynerd/gitlab-ci-nix][8]:

```yaml
.nix:
  image: registry.gitlab.com/cynerd/gitlab-ci-nix
  cache:
    key: "nix"
    paths:
      - ".nix-cache"
  before_script:
    - gitlab-ci-nix-cache-before
  after_script:
    - gitlab-ci-nix-cache-after

pages:
  extends: .nix
  stage: deploy
  script:
    - nix run . -- build -d public && gzip -k -6 $(find public -type f)
  artifacts:
    paths:
      - public
```

I'm sure there is an analogous image for GitHub CI, but honestly, I haven't tested any.
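For GitHub Actions, a rough, untested sketch might look like the following. Everything here is an assumption on my part (the workflow file name, the action versions, and the Pages deployment wiring), so treat it as a starting point rather than a verified configuration:

```yaml
# .github/workflows/pages.yml (hypothetical sketch, not tested by the author)
name: pages
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Installs Nix on the runner; flakes are enabled by default here
      - uses: cachix/install-nix-action@v27
      # Same command as in the GitLab pipeline above
      - run: nix run . -- build -d public
      - uses: actions/upload-pages-artifact@v3
        with:
          path: public
```

A separate `deploy` job with `actions/deploy-pages` would still be needed to publish the artifact, and a Nix store cache (e.g. via a cache action) would speed up repeated builds.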
<!-- prettier-ignore-start -->
[1]: /2024/01/16/managing-dotfiles-with-nix/
[2]: https://litchipi.github.io/nix/2023/01/12/build-jekyll-blog-with-nix.html
[3]: https://nathan.gs/2019/04/19/using-jekyll-and-nix-to-blog/
[4]: https://stesie.github.io/2016/08/nixos-github-pages-env
[5]: https://github.com/inscapist/ruby-nix
[6]: /2023/11/16/notion-jekyll-images-synchronization/
[7]: https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/ruby-modules/gem-config/default.nix
[8]: https://gitlab.com/Cynerd/gitlab-ci-nix
[9]: /2023/09/08/link-saver-bot-for-telegram/
<!-- prettier-ignore-end -->

---

# Article: [Using a heavyweight JS/TS library in a JVM project](/2024/05/20/using-a-heavyweight-js-ts-library-in-a-jvm-project/)

Today I want to share some details about the implementation of my recent project, the Scala library [urlopt4s][1]. This library provides a simple interface for filtering advertising and tracking parameters out of a URL. It doesn't contain any filtering code itself, but rather uses the [AdGuard][2] adblocker engine under the hood. So, in addition to its main function, this library can be treated as a proof of concept for running almost any kind of JavaScript code on the JVM.

<!--more-->

## What exactly does it do?

To make things clear, let's first see what exactly this library does. To get started with it, all you need to do is add a dependency:

```scala
libraryDependencies += "me.seroperson" %% "urlopt4s" % "0.1.0"
```

And then do something like this to initialize an instance and run the filtering:

```scala
// ...

object ExampleApp extends IOApp {

  override def run(args: List[String]): IO[ExitCode] =
    UrlOptimizer[IO]().use { urlOptimizer =>
      for {
        result <- urlOptimizer.removeAdQueryParams("https://www.google.com/?utm_source=test")
        _ <- IO.println(result) // "https://www.google.com/"
      } yield ExitCode.Success
    }

}
```

The resulting URL is unlikely to contain any advertisements or tracking parameters.
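For contrast, here is a hypothetical sketch of the naive approach, stripping a hardcoded list of known tracking parameters. This is not part of urlopt4s; the class name and parameter list are mine, purely for illustration:

```java
import java.util.Arrays;
import java.util.Set;
import java.util.stream.Collectors;

public class NaiveUrlCleaner {

    // A tiny hardcoded blacklist. Real-world lists are huge, site-specific and
    // constantly changing, which is why delegating to a full adblocker engine
    // is a more robust approach.
    private static final Set<String> KNOWN_PARAMS =
        Set.of("utm_source", "utm_medium", "utm_campaign", "fbclid", "gclid");

    public static String removeKnownParams(String url) {
        String[] parts = url.split("\\?", 2);
        if (parts.length < 2) {
            return url;
        }
        // Keep only the query parameters whose name is not blacklisted
        String kept = Arrays.stream(parts[1].split("&"))
            .filter(p -> !KNOWN_PARAMS.contains(p.split("=", 2)[0]))
            .collect(Collectors.joining("&"));
        return kept.isEmpty() ? parts[0] : parts[0] + "?" + kept;
    }

    public static void main(String[] args) {
        System.out.println(removeKnownParams("https://www.google.com/?utm_source=test&q=jvm"));
        // https://www.google.com/?q=jvm
    }
}
```

It handles the example above, but it would silently miss every parameter that isn't in its list.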
Packaging this functionality in a library may seem excessive, but reading [this section][3] will explain why simple filtering by a predefined set of known advertisement and tracking parameters does not work well.

### How exactly does it work?

Under the hood, it runs JavaScript and loads the [AdGuard adblocker engine][4] with an optimized set of filtering rules to do the dirty work and clean a given URL. This is a really large JS library with many dependencies, including TypeScript and modern JavaScript APIs, and _urlopt4s_ is a good proof of concept that we can run even this on the JVM (via GraalJS) right now. [Read this for more information][5] about implementation details.

## TL;DR JS on JVM

Still, there are some restrictions and a few tweaks required to make everything work. I'm not going to provide a complete guide here because it's better to check the code, but here's a short summary of how to run non-hello-world JavaScript code on the JVM:

- Create a JS module, define webpack and package configurations, place your JS code there, and export the functions you need to call from the JVM.
- Build an optimized bundle and place it as a resource inside your JVM project.
- Initialize a GraalJS context, load the JS bundle and get a pointer to the previously exported function.
- Wrap everything into a neat Scala interface which just calls the JS code.

## Additional tips on implementing JS on JVM

Leaving here some of my experience which could be helpful:

- The [GraalJS documentation][6] (and [this][7]) has a lot of really useful things.
- Usually you want to pack your JS code as a CommonJS module and set all the related GraalJS properties, such as `js.commonjs-require-cwd` and `js.commonjs-require` ([docs][8]), as it makes using modern JS libraries easier.
- Sometimes your JS code will work on, e.g., Node.js, but not on GraalJS. There are many reasons why this can happen, but most likely it will be some API which GraalJS is missing. In this case you will encounter errors like `ReferenceError: TextDecoder is not defined` (or `ReferenceError: TextEncoder is not defined`, or some other `is not defined`). To deal with it, you will have to add polyfills to your JS bundle with, for example, webpack's [ProvidePlugin][9].
- When using webpack to build your JS code as a module and an error like `ReferenceError: global is not defined` occurs, you probably have to set the [globalObject][10] property to `globalThis`.
- It is a good idea to restrict IO permissions for your GraalJS context (like [this][11]).

<!-- prettier-ignore-start -->
[1]: https://github.com/seroperson/urlopt4s
[2]: https://github.com/AdguardTeam
[3]: https://github.com/seroperson/urlopt4s#preface
[4]: https://github.com/AdguardTeam/tsurlfilter
[5]: https://github.com/seroperson/urlopt4s/?tab=readme-ov-file#implementation-details
[6]: https://github.com/oracle/graaljs/tree/master/docs/user
[7]: https://www.graalvm.org/latest/reference-manual/js/
[8]: https://www.graalvm.org/latest/reference-manual/js/Modules/#ecmascript-modules-esm
[9]: https://webpack.js.org/plugins/provide-plugin/
[10]: https://webpack.js.org/configuration/output/#outputglobalobject
[11]: https://github.com/seroperson/urlopt4s/blob/394ab89d4c4c320d4476be05e61b2a34391daf02/urlopt4s/src/me/seroperson/urlopt4s/UrlOptimizer.scala#L63
<!-- prettier-ignore-end -->

---

# Article: [Managing dotfiles with Nix](/2024/01/16/managing-dotfiles-with-nix/)

I've been managing **[⚙️ my dotfiles][1]** for quite a while now. Initially, it was just a collection of configuration files that required manual setup, with the hope that everything would function properly. I've always been in search of a better way to organize them, and I believe I've finally found one.

<!--more-->

At first glance, managing dotfiles seems like an easy task. But as you dive deeper, you may encounter many problems.
Picture this: you're a novice developer who has decided to share their dotfiles (which include `zsh`, `vim`, and other items) by creating a GitHub repository. Everything goes smoothly until you have to "install" these dotfiles on another machine. Here are some potential problems you might face after cloning the repository:

- Each configuration file must be placed (or "installed") in its corresponding location. For example, `dotfiles/.vimrc` should go to `$HOME/.vimrc`, `dotfiles/.zshrc` to `$HOME/.zshrc`, and so on.
- All the dependencies must be installed.
- (_optionally_) The setup should be OS-independent.
- (_optionally_) It should be possible to uninstall the dotfiles and all their dependencies.
- (_optionally_) It should be easy to try / to install (take a look at my article on this topic: **[🔍 Previewing nix dotfiles](/2025/05/26/previewing-nix-managed-dotfiles)**).

Initially, I managed my dotfiles repository manually as a bare git repository, without using any management tool. Although this approach worked, it required significant manual effort. Every time I deployed the dotfiles, I faced new challenges, which prompted me to improve my dotfiles management. That's how I started using Nix for all these tasks. It's worth noting that Nix can be challenging to learn and use. Coding and debugging can be frustrating. However, its features make the effort worthwhile, which is why we're using it here. If you decide to stick with it, you might find it beneficial in other applications such as NixOS, reproducible CI builds, and isolated project environments. Let's examine step-by-step how we can address the mentioned problems using Nix, and why we should choose it for these tasks. I won't detail every point as, to be honest, I'm still in the process of implementing them myself, but rest assured, all of them are certainly feasible.

## Nix vs. well-known solutions

There are numerous solutions designed to alleviate the issues of "dotfiles hell", but none of them is perfect.
Some options include [chezmoi][2], [yadm][3], [GNU Stow][4] and [dotbot][5]. Using any of these tools is preferable to using none at all, as a bare git repository requires even more manual effort. While these tools can simplify the process, none of them is a silver bullet. Using Nix, specifically [home-manager][6], for dotfiles management has several advantages over other tools:

- Your environment is defined by code, not configuration files or manual interactions.
- Nix includes a package manager, allowing you to describe not only how to install your dotfiles, but also their dependencies. The [repository][7] is extensive and should cover all your needs. If a package isn't available, there are numerous ways to install it from a git repository or another source.
- Everything is isolated, so your system isn't cluttered with dotfiles dependencies.

However, to use Nix correctly, you'll still need to write a fair amount of code. Despite this, it is less error-prone and less complicated than manual shell scripting.

## Prerequisites

This article doesn't aim to duplicate existing guides for installing Nix; instead, I recommend referring to the following well-curated guides:

- [Official Nix reference manual][8].
- Installing Nix: [official quick guide][9], [official detailed guide][10], [advanced community-driven guide][11], and [ArchLinux guide][12].
- [Installing home-manager as a standalone "flake"][13]. Think of a flake as a way to define a Nix package. This guide provides a bootstrap configuration to start with.
Also, after installing Nix, be sure to create the file `$HOME/.config/nix/nix.conf` with the following content to make using Nix's CLI comfortable:

```shell
experimental-features = nix-command flakes
```

## Managing dotfiles with Nix

After following all the instructions provided, you will have a folder with home-manager's configuration that looks like this:

```shell
$ tree $HOME/.dotfiles -a -I .git
/home/seroperson/.dotfiles
├── .config/  # My actual dotfiles (zsh, nvim, git etc confs)
├── flake.lock
├── flake.nix
└── home.nix
```

In this example, we're storing all our dotfiles in the `$HOME/.dotfiles/.config` folder (you may want to rename it), which we will link to their actual locations. The `flake.nix` contains some flake configuration, but for now `home.nix` is the most interesting file to look at. It contains the home-manager configuration where we will define all our packages and other settings. My `home.nix` looks like this:

```nix
{ callPackage, config, pkgs, ... }: {

  # ...
  # Some default configuration
  # ...

  home.packages = [
    pkgs.git
    pkgs.neovim
    pkgs.zsh
    # ...
  ];

  home.file.".zshenv" = {
    source = config.lib.file.mkOutOfStoreSymlink
      "${config.home.homeDirectory}/.dotfiles/.config/zsh/.zshenv";
  };

  xdg.configFile = {
    "zsh" = {
      source = config.lib.file.mkOutOfStoreSymlink
        "${config.home.homeDirectory}/.dotfiles/.config/zsh";
      recursive = true;
    };
    # ...
  };
}
```

So, let's check step-by-step what we see here:

```nix
home.packages = [
  pkgs.git
  pkgs.neovim
  pkgs.zsh
  # ...
];
```

This is where we describe the packages to be installed from the [nixpkgs repository][14]. If a package is not available in the repository (which is rare), you can define your own. However, this requires some advanced Nix knowledge. If you plan to do so, the following guides are a good starting point: [Our First Derivation][15], [Derivations][16].
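To give a taste of what those guides cover, a custom package boils down to a derivation. The sketch below is purely illustrative: the package name, owner, repository and hash are all placeholders, and a real package would usually need build inputs and phases tailored to the tool:

```nix
# Hypothetical example: packaging a tool that isn't in nixpkgs.
# All names, URLs and the hash here are placeholders.
my-tool = pkgs.stdenv.mkDerivation {
  pname = "my-tool";
  version = "1.0.0";
  src = pkgs.fetchFromGitHub {
    owner = "example";
    repo = "my-tool";
    rev = "v1.0.0";
    # Nix refuses to build until you substitute the real hash
    sha256 = pkgs.lib.fakeSha256;
  };
  installPhase = ''
    mkdir -p $out/bin
    cp my-tool $out/bin/
  '';
};
```

Such a derivation can then be added to `home.packages` alongside the entries coming from nixpkgs.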
Let's look at the next snippet:

```nix
home.file.".zshenv" = {
  source = config.lib.file.mkOutOfStoreSymlink
    "${config.home.homeDirectory}/.dotfiles/.config/zsh/.zshenv";
};

xdg.configFile = {
  "zsh" = {
    source = config.lib.file.mkOutOfStoreSymlink
      "${config.home.homeDirectory}/.dotfiles/.config/zsh";
    recursive = true;
  };
  # ...
};
```

This section of the code links our configuration files to their expected paths. Specifically, it links `$HOME/.dotfiles/.config/zsh/.zshenv` (which contains my zsh configuration) to `$HOME/.zshenv`, and the `$HOME/.dotfiles/.config/zsh` folder to `$XDG_CONFIG_HOME/zsh` (or `$HOME/.config/zsh`). This basic setup is sufficient for configuring simple dotfiles. To apply all your `*.nix` changes, execute the following command (it may vary depending on how you followed the previous guides):

```shell
nix run home-manager/release-23.11 -- init --switch $HOME/.dotfiles/
```

## Configuring things in Nix-way

An **important** thing to note about our method is that it isn't the true Nix style; it's often referred to as "impure". We link our actual dotfiles to their locations via Nix, which allows for direct and easy editing, much like traditional methods. For example, `vim $HOME/.zshenv` will open our `$HOME/.dotfiles/.config/zsh/.zshenv`. We can then edit, save, close, then open a new zsh instance or reload the current one and voilà, the changes have been applied. Doing things the Nix way, all configuration content should be "immutable". Ideally, all your configuration should reside inside `.nix` files. This eliminates the need for standalone files such as `tmux.conf`, `.zshrc`, `git/config`, and so on. However, having everything inside `.nix` files means you'll need to rebuild home-manager (run the command above) after each edit. This process removes the convenience of hot-reloads within a shell or editor, making edits more cumbersome. Despite this, there are some advantages. Let's look at how to define something the Nix way.
For example, some popular tools have a built-in DSL to build a config:

```nix
# home.nix
programs.git = {
  enable = true;
  userName = "seroperson";
  userEmail = "seroperson@gmail.com";
};
```

It results in the file `$HOME/.config/git/config`:

```shell
[user]
  email = "seroperson@gmail.com"
  name = "seroperson"
```

Sometimes this may come in handy, as the config can now literally be coded. For example, you can code how the `userName` variable is resolved, so it doesn't have to be static. Of course, you will also have to rebuild home-manager after each edit for everything to apply. There are numerous predefined programs available, including zsh, fish, vim, neovim, tmux and others. You can search for them [here][17] and [here][18] (right inside the home-manager git repository).

## Configuring tmux

Let's say we need to set up the `tmux.conf` configuration file, which has some plugins installed via the [tmux plugin manager][19]. If you're using a bare git repository to manage your dotfiles, you'll likely have some shell code to `git clone` the plugin manager repository. You'll also need code to check whether it has already been cloned. Ideally, you'd want to automatically install all the plugins defined in `tmux.conf`; however, that isn't easy to accomplish with shell code. That's where using the Nix API to configure things becomes a preferable option. Here's what the result might look like:

```nix
# nix/tmux.nix
{ pkgs, ... }:
let
  # tmux-yank = pkgs.tmuxPlugins.mkTmuxPlugin {
  #   pluginName = "tmux-yank";
  #   version = "2.3.0";
  #   src = pkgs.fetchFromGitHub {
  #     owner = "tmux-plugins";
  #     repo = "tmux-yank";
  #     rev = "acfd36e4fcba99f8310a7dfb432111c242fe7392";
  #     sha256 = "";
  #   };
  # };
in {
  programs.tmux = {
    enable = true;
    historyLimit = 100000;
    keyMode = "vi";
    escapeTime = 0;
    baseIndex = 1;
    plugins = with pkgs; [
      {
        plugin = tmuxPlugins.catppuccin;
        extraConfig = ''
          set -g @catppuccin_flavour 'mocha'
          # ...
          set -g @catppuccin_date_time_text "%H:%M:%S"
        '';
      }
    ];
    extraConfig = ''
      set -g prefix C-t
      bind n next-window
      bind p previous-window
      # ...
    '';
  };
}
```

```nix
# home.nix
# ...
{
  imports = [ ./nix/tmux.nix ];
}
```

You can now safely remove `tmux.conf`. It will be built automatically, and all plugins will also be installed automatically. The only remaining inconveniences are that you have to rebuild home-manager after each edit, and editing a `.nix` file with an embedded configuration is not as comfortable as editing a plain `tmux.conf`.

## Configuring AstroNvim

<%= render "alert", type: "warning", message: "This section is a little bit outdated as it was written for AstroNvim < v4." %>

Next, we'll examine how to configure the community-driven nvim distribution [AstroNvim][20], which I personally use. In short, the installation involves cloning the distribution repository to `$HOME/.config/nvim/` and then adding your custom options to `$HOME/.config/astronvim/lua/user/`. If you are manually managing your dotfiles, you will most likely need to clone the repository via shell code and then again check whether it has already been cloned, or perhaps use a git submodule, which I don't particularly favor. To do everything with Nix, you simply have to:

```nix
# home.nix
{
  # ...
  xdg.configFile = {
    # ...
    "nvim" = {
      source = pkgs.fetchFromGitHub {
        owner = "AstroNvim";
        repo = "AstroNvim";
        rev = "271c9c3f71c2e315cb16c31276dec81ddca6a5a6";
        sha256 = "h019vKDgaOk0VL+bnAPOUoAL8VAkhY6MGDbqEy+uAKg=";
      };
    };
    # AstroNvim allows you to separate custom configuration from the repository itself
    # docs.astronvim.com/configuration/manage_user_config/#setting-up-a-user-configuration
    "astronvim/lua/user" = {
      source = config.lib.file.mkOutOfStoreSymlink
        "${config.home.homeDirectory}/.dotfiles/.config/astronvim";
      recursive = true;
    };
  };
}
```

My `$HOME/.dotfiles/.config/astronvim` directory contains the `init.lua`, `options.lua`, and `mappings.lua` files, which hold all the custom configuration for AstroNvim.
The distribution will automatically be cloned by Nix during the next home-manager rebuild. It's worth noting that there's also [nixvim][21], a nixified distribution of nvim. However, it's entirely up to you whether to use it or not.

## Conclusion

So, that's it. Nix can be tricky sometimes, but as stated earlier, the effort pays off with its features. My dotfiles are far from ideal and still require quite a bit of time to perfect; there are still points (which I described at the beginning of the article) left to implement, but even in their current state, everything feels much better than manual management.

<!-- prettier-ignore-start -->
[1]: https://github.com/seroperson/dotfiles
[2]: https://www.chezmoi.io/
[3]: https://yadm.io/
[4]: https://www.gnu.org/software/stow/
[5]: https://github.com/anishathalye/dotbot
[6]: https://github.com/nix-community/home-manager
[7]: https://search.nixos.org/packages
[8]: https://nixos.org/manual/nix/unstable/introduction
[9]: https://nixos.org/download.html#nix-install-linux
[10]: https://nixos.org/manual/nix/unstable/installation/installing-binary
[11]: https://nixos.wiki/wiki/Nix_Installation_Guide
[12]: https://wiki.archlinux.org/title/Nix
[13]: https://nix-community.github.io/home-manager/index.xhtml#sec-flakes-standalone
[14]: https://search.nixos.org/packages
[15]: https://nixos.org/guides/nix-pills/our-first-derivation.html
[16]: https://nixos.org/manual/nix/stable/language/derivations
[17]: https://mipmip.github.io/home-manager-option-search/
[18]: https://github.com/nix-community/home-manager/tree/master/modules/programs
[19]: https://github.com/tmux-plugins/tpm
[20]: https://astronvim.com/
[21]: https://github.com/nix-community/nixvim
<!-- prettier-ignore-end -->

---

# Article: [Implementing a GraalVM custom Feature](/2023/12/05/implementing-a-graalvm-custom-feature/)

I've been coding a GraalVM-powered Scala application for a while now, and the more complicated this project becomes, the more GraalVM-related issues
arise. So, today I will show you how to implement a custom Feature to enable the use of a heavily reflection-based library within a GraalVM application.

<!--more-->

## What is a Feature?

Literally, a Feature is a Java interface ([javadoc][1]) that provides methods to hook into different native-image generation stages. By implementing it, you can customize your image generation process and overcome some GraalVM restrictions within your application. When something doesn't work in the generated binary, the most common reason is dynamic features (reflection, resources, proxies, etc.). Usually, all you need to do is fill in the corresponding [reachability metadata][2], but there are some cases when implementing an appropriate custom Feature is preferable. Generally speaking, it gives you more control over the process and allows you to define in code what to do, instead of hard-coding configuration files. As examples, you can check some built-in GraalVM Features: [GsonFeature][3], [ScalaFeature][4], [JUnitFeature][5].

## Implementing a custom Feature

So, when should you implement a custom Feature? As I see it, these are the most common cases:

- When reachability metadata can't cover all your needs.
- When you need to decide dynamically what should be allowed to be used with "dynamic features".

I'm sure there are more suitable cases to note, but that's what I see. Next, I will show you an example of implementing a custom Feature for the [htmlunit][6] library, which I'm using in my [link saver bot][7] to fetch OpenGraph metadata for a webpage. The problem with running this library under GraalVM is that it uses reflection **really a lot**. Even using a tracing agent to generate `reflect-config.json` doesn't help here, because the reflection calls depend on the executed JavaScript code, and it is hard to cover all possible reflection calls during a tracing session.
I tried several times, but each time I got different results, and there was no guarantee that all reflection calls were covered. This case can be easily resolved by implementing a custom Feature. However, it would be even easier if the [issue][8] were fixed. But since it is still unresolved, using a custom Feature remains the only solution. ### Prerequisites Before we start, I would like to highlight a few things about custom Features: - A custom Feature implementation can be placed right inside your project. You don't need to extract it into a separate library or module. - It is better to avoid using many libraries inside a Feature, especially ones used both in the Feature and in the project itself. This can lead to errors like `Classes that should be initialized at run time got initialized during image building` (which literally means that a class can't be initialized both at run time and at build time). To fix it, you can add the `--initialize-at-build-time=className` option, but it is better to avoid such issues in the first place. - Be careful when writing your custom Features in a non-Java language, especially if your project is also written in the same non-Java language, for the reason mentioned in the previous point. - Be sure to add the Graal SDK as a compile-time dependency (latest version at [mvnrepository.com][9]): ```scala libraryDependencies += "org.graalvm.sdk" % "graal-sdk" % Version.graalvm % "provided" ``` - If you are going to interact with `java.lang.reflect`, it is probably worth adding the [reflections][10] library too, but it's up to you (be careful with it, though, because it pulls in the `slf4j-api` dependency, which may break things a little; see the second point): ```scala libraryDependencies += "org.reflections" % "reflections" % Version.reflections % "provided" ``` - Also, just in case, it is better to keep your Graal SDK version synchronized with your GraalVM JDK version. ### Code So, finally, the code.
The full listing looks like this in my case: ```java package me.seroperson.graalvm; import org.graalvm.nativeimage.hosted.Feature; import org.graalvm.nativeimage.hosted.RuntimeReflection; import org.reflections.Reflections; import org.reflections.scanners.Scanners; import java.lang.reflect.Constructor; import java.lang.reflect.Modifier; import java.util.Arrays; import java.util.function.Consumer; import java.util.stream.Stream; public class HtmlUnitFeature implements Feature { @Override public String getDescription() { return "Makes htmlunit headless browser work correctly"; } @Override public void beforeAnalysis(BeforeAnalysisAccess access) { Consumer<Class<?>> registerAll = (accessClass) -> registerClass(access, accessClass); doForEachClassInPackage( access, "org.htmlunit.corejs", "org.htmlunit.corejs.javascript.Scriptable", registerAll ); doForEachClassInPackage( access, "org.htmlunit.javascript", "org.htmlunit.corejs.javascript.Scriptable", registerAll ); doForEachClassInPackage( access, "org.htmlunit.svg", "org.w3c.dom.Element", registerAll ); Stream .of( "org.htmlunit.BrowserVersionFeatures", "org.htmlunit.corejs.javascript.jdk18.VMBridge_jdk18", "org.htmlunit.javascript.host.ConsoleCustom", "org.htmlunit.javascript.host.DateCustom", "org.htmlunit.javascript.host.NumberCustom" ) .forEach(str -> registerAll.accept(access.findClassByName(str))); } private void doForEachClassInPackage(BeforeAnalysisAccess access, String packageName, String parentName, Consumer<Class<?>> action) { Reflections reflections = new Reflections(packageName, Scanners.SubTypes); reflections .getSubTypesOf(access.findClassByName(parentName)) .forEach(action); } private void registerClass(BeforeAnalysisAccess access, Class<?> accessClass) { access.registerAsUsed(accessClass); RuntimeReflection.register(accessClass); RuntimeReflection.registerAllFields(accessClass); RuntimeReflection.registerAllConstructors(accessClass); RuntimeReflection.registerAllMethods(accessClass); 
if(!(accessClass.isArray() || accessClass.isInterface() || Modifier.isAbstract(accessClass.getModifiers()))) { try { @SuppressWarnings("unused") Constructor<?> nullaryConstructor = accessClass.getDeclaredConstructor(); RuntimeReflection.registerForReflectiveInstantiation(accessClass); } catch (NoSuchMethodException ignored) { } } Arrays.stream(accessClass.getMethods()) .forEach(method -> { RuntimeReflection.registerMethodLookup(accessClass, method.getName(), method.getParameterTypes()); }); Arrays.stream(accessClass.getFields()) .forEach(field -> { RuntimeReflection.registerFieldLookup(accessClass, field.getName()); access.registerAsAccessed(field); }); } } ``` In short, it simply enables most of the reflection features for the given packages and a few standalone classes. We can't easily do this with `reflect-config.json`, so we have to do it in code. It might be a bit excessive and there is room for optimization, but at least it is a good starting point. In the same way, we can also register proxies ([RuntimeProxyCreation][11]), manage resources ([RuntimeResourcesAccess][12]), handle JNI calls ([RuntimeJNIAccess][13]), and so on ([link][14]). It is also worth mentioning the [isInConfiguration][15] method, which allows you to dynamically enable or disable a Feature. For example, you can disable your Feature if the library in question is not present on the classpath at all. That's how the built-in `ScalaFeature` works: ```java public class ScalaFeature implements InternalFeature { // ... @Override public boolean isInConfiguration(IsInConfigurationAccess access) { return access.findClassByName("scala.Predef") != null; } // ... } ``` There are dozens of stages available to hook into besides `beforeAnalysis`. It is quite possible that something else would suit your needs better.
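For contrast, registering full reflective access to just a single class via `reflect-config.json` looks roughly like this (the class name below is a made-up illustration, not a real htmlunit class); hand-writing such an entry for every htmlunit subclass is exactly the tedium that the programmatic Feature avoids:

```json
[
  {
    "name": "org.htmlunit.javascript.host.SomeHostObject",
    "allDeclaredFields": true,
    "allDeclaredConstructors": true,
    "allDeclaredMethods": true
  }
]
```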
So, the only thing left is to add the corresponding build parameter in order to enable the Feature (the snippet is for `sbt`, but I think you'll figure it out for your build tool): ```scala nativeImageOptions ++= Seq( // Enables htmlunit reflection "--features=me.seroperson.graalvm.HtmlUnitFeature" ) ``` That's it! Now you can start the build; your output should contain a list of applied features somewhere, like so: ```text 4 user-specific feature(s): - com.oracle.svm.polyglot.scala.ScalaFeature - com.oracle.svm.thirdparty.gson.GsonFeature - me.seroperson.graalvm.HtmlUnitFeature: Makes htmlunit headless browser work correctly - org.graalvm.home.HomeFinderFeature: Finds GraalVM paths and its version number ``` You should now test it a bit and make sure everything works. <!-- prettier-ignore-start --> [1]: https://www.graalvm.org/sdk/javadoc/org/graalvm/nativeimage/hosted/Feature.html [2]: https://www.graalvm.org/latest/reference-manual/native-image/metadata/ [3]: https://github.com/oracle/graal/blob/de9f8d54051936f6749a8d8514288fe00684d501/substratevm/src/com.oracle.svm.thirdparty/src/com/oracle/svm/thirdparty/gson/GsonFeature.java [4]: https://github.com/oracle/graal/blob/de9f8d54051936f6749a8d8514288fe00684d501/substratevm/src/com.oracle.svm.polyglot/src/com/oracle/svm/polyglot/scala/ScalaFeature.java [5]: https://github.com/oracle/graal/blob/de9f8d54051936f6749a8d8514288fe00684d501/substratevm/src/com.oracle.svm.junit/src/com/oracle/svm/junit/JUnitFeature.java [6]: https://github.com/HtmlUnit/htmlunit [7]: /2023/09/08/link-saver-bot-for-telegram/ [8]: https://github.com/oracle/graal/issues/1236 [9]: https://mvnrepository.com/artifact/org.graalvm.sdk/graal-sdk [10]: https://github.com/ronmamo/reflections [11]: https://www.graalvm.org/sdk/javadoc/org/graalvm/nativeimage/hosted/RuntimeProxyCreation.html [12]: https://www.graalvm.org/sdk/javadoc/org/graalvm/nativeimage/hosted/RuntimeResourceAccess.html [13]:
https://www.graalvm.org/sdk/javadoc/org/graalvm/nativeimage/hosted/RuntimeJNIAccess.html [14]: https://www.graalvm.org/sdk/javadoc/org/graalvm/nativeimage/hosted/package-summary.html [15]: https://www.graalvm.org/sdk/javadoc/org/graalvm/nativeimage/hosted/Feature.html#isInConfiguration(org.graalvm.nativeimage.hosted.Feature.IsInConfigurationAccess) <!-- prettier-ignore-end --> --- # Article: [Notion + Jekyll images synchronization](/2023/11/16/notion-jekyll-images-synchronization/) Recently, I successfully set up [Notion + Jekyll synchronization][1] using the [jekyll-fetch-notion][2] plugin. It had been meeting all my needs until my previous post, where I needed to attach some images. Images must be fetched from Notion and stored under git control, so here is a recipe for this problem. <!--more--> <%= render "alert", type: "warning", message: "Since September 2024, [Notion no longer works for users based in Russia](https://www.notion.so/help/restrictions-for-customers-based-in-russia). I have archived the plugin and don't support it anymore. However, you can still use it; I think it will continue to work without any changes for a long time." %> ## What is the problem Let me remind you how our synchronization works. We have a pipeline, `.gitlab-ci.yml`, which (briefly): - Runs the `jekyll fetch_notion` command, which pulls everything from Notion, converts it to `.md`, and places everything inside your repository. - Stages all the newly pulled `.md` files, commits any new changes, and pushes them. - A new commit in the repository then triggers another pipeline, which builds and deploys the site. It works great as long as our posts don't have any images. The problem with images is that Notion generates a unique short-lived URL for them every time, and after a few minutes, these URLs stop working. This problem is relevant not only to the git-based Notion synchronization approach.
You can solve it without any additional code just by embedding images via a direct URL. Then, Notion won't save them on its servers, and the output will always be the same. However, you will then need to manually publish images to your site before posting. It sounds inconvenient, but at least it works. ## How to solve it right I prefer to follow a git-based approach here as well and save images to the repository during the `fetch_notion` command. All we need to do is monkey-patch the handling of the `image` block. Here is an example of how to work with block handling in general and how to handle custom blocks that are not handled by default: [Embedding videos with jekyll-notion][3]. So, we override the `image` block handling like this: ```ruby require 'open-uri' module NotionToMd module Blocks class Types class << self def image(block) type = block[:type].to_sym url = URI.parse(block.dig(type, :url)) # we can also retrieve a caption here like this: # caption = convert_caption(block) # https://example.com/filename.jpg?queryParams -> filename.jpg filename = "assets/#{url.to_s.split('/')[-1].split('?')[0]}" IO.copy_stream(url.open, filename) Jekyll.logger.info("Image #{File.absolute_path(filename)} #{"OK".green}") return "![](/#{filename})" # or you can use jekyll_picture_tag plugin and return liquid # tag "picture" here, which automatically generates responsive # images for you. end end end end end ``` You should put it into `_plugins/notion_to_md/blocks/types.rb` (or any other `.rb` file inside the `_plugins` directory) to make it work. You also need to add the following lines to your `Gemfile`: ```ruby # Fetching files easily gem 'open-uri' # if you are going to use responsive images plugin group :jekyll_plugins do # ... # github.com/rbuchberger/jekyll_picture_tag gem 'jekyll_picture_tag', '2.0.4' end ``` That's it! Now your images will be stored in the repository during the `fetch_notion` command.
<%= picture 'images/2023-11-16-notion-img-sync/build-fetch-logs.png' %> ## Side note: How to make monkey-patching available for jekyll commands One more thing to note is that monkey-patching is not available for every single Jekyll command by default. To make it work, your command must initialize a `Jekyll::Site` object during processing; otherwise, your `.rb` won't be loaded. That's how I did it for the `fetch_notion` command: ```ruby module JekyllNotion class FetchCommand < Jekyll::Command def self.init_with_program(p) p.command(:fetch_notion) do |c| # ... c.action do |args, options| process(args, options) end end end def self.process(args = [], options = {}) @config = configuration_from_options(options) # ... # requires plugins (and _plugins/ directory) to be able to # define custom notion_to_md blocks via monkey-patching site = Jekyll::Site.new(@config) # ... end end ``` I'm just leaving this note here because it was actually hard to figure out, since it isn't documented anywhere. <!-- prettier-ignore-start --> [1]: /2023/08/26/yet-another-way-to-establish-notion-jekyll-synchronization/ [2]: https://github.com/seroperson/jekyll-fetch-notion [3]: https://enrq.me/dev/2023/03/31/embedding-videos-with-jekyll-notion/ <!-- prettier-ignore-end --> --- # Article: [Civilization 5 multiplayer modding](/2023/11/10/civilization-5-multiplayer-modding/) I have been involved in developing a Civilization 5 modification for a while now, and today I would like to discuss the state of the game, promote some popular projects, and highlight my own contributions in this field. <!--more--> ## Brief introduction [Civilization 5][23] is a turn-based strategy game that was released in 2010. In the game, players take on the role of a leader of a civilization and guide it through various eras, from ancient times to the modern age. I think everyone who reads this article already knows what this game is. The very last official patch for V was dated [October 27, 2014][1].
Since then, Civilization VI has become the main focus of franchise development, and there are even rumors about VII. However, there are still [many people playing Civilization V][2]. Of course, the player count is much lower than VI's, but it is still quite high for a game that has not received official patches in almost ten years. ## Modders community One of the main reasons why it is still popular is the large and active modding community. There are about 10k+ mods available in the [Steam Workshop][3] and around 500k+ messages in the modding subforum at [civfanatics.com][4]. Mods make this game feel fresh and exciting to play. As of now, almost everything can be changed. The core library, which contains all the logic, is open-sourced (only the Windows DLL version, but at least that). Everything else can be changed by editing `.lua`, `.xml`, `.sql`, and other files. ## Vox Populi (also known as VP) As I mentioned, there are many mods available, but currently the "central" mod is [Vox Populi][5] ([civfanatics subforum][6], [discord channel][7]). It brings together numerous developers who are focused on a single mod with a single direction. The repository for this mod's code currently has around 7k commits and 250 stars, which is quite impressive for a game mod. If you are looking for a new Civilization 5 experience, this mod is a good place to start. It brings a lot of new features, gameplay mechanics, balancing, and more, while still maintaining the "style" of the game: it is still the good old Civilization 5. Some other notable mods to mention (there are actually many more good mods available, but I'm just mentioning a few): [4UC][8] (adds more unique components to each civilization), [More Wonders][9] (adds more wonders, surprisingly), [Even More Resources][10] (adds more resources). If you are going to try, be sure to follow the instructions on [how to play Civilization 5 + VP][11] and the [guide about mod installation in general][12].
There is also a [forum thread about multiplayer VP][22]. One thing to note is that it is not recommended to install mods via the Steam Workshop, as they often contain outdated versions. Instead, use the `.zip` archives from the forum and GitHub. # What and how am I doing there I have been playing Civilization 5 occasionally since its release, but only in multiplayer sessions. Some years ago, we decided to try modded multiplayer (with plain VP, around 2018), and it ran quite well. However, when we attempted to play again in 2023, multiplayer VP was literally unplayable. Since then, I have been contributing to VP to make it compatible with multiplayer. ## Multiplayer compatibility Until recently, there was no understanding of how to make mods completely multiplayer-compatible. After some research, I discovered that there are actually no major issues with it; just some basic rules must be followed. Understanding exactly how multiplayer works has helped in defining these rules. Imagine we started a session with several humans and several AIs. A multiplayer session works like so: - Each human player has their own game state (unit positions, founded cities, assigned citizens, and so on). There are a lot of in-game variables that are part of the game state. - Each human player calculates their own state locally, during and after each turn. This includes AI processing logic, fight results, path routing, and so on. Every calculation is done on the client side; there is no server that runs any logic for clients. - For example, the game's core logic **must** work identically on each end when processing how the AI behaves at the beginning of a turn. If there is a flaw in the logic and the AI moves a unit to position _(1;0)_ on player A's end, but on player B's end the AI has moved the same unit to _(2;0)_, then a desync occurs. Such flaws in logic are the main reason why mods usually aren't multiplayer-compatible.
- Human actions (such as "move unit", "delete unit", "make road", "build something in city") are broadcast to other humans via network messages. - Each human handles such network messages and modifies their own state. <%= picture 'images/2023-11-10-civilization/civ5-mp.webp' %> - After processing each network message, each human must have an identical game state. Otherwise, the game will continue to run in a "desynced" state, which can lead to crashes and undefined behavior. - Under certain circumstances, the host can decide to "re-sync" everyone, so that every player will have the same state as the host. However, if an installed mod contains multiplayer-incompatible logic, a re-sync still doesn't guarantee further stability. Making a modification multiplayer-compatible means coding it so that: - Each human calculates their own game state identically. - All human actions are broadcast over the network to the other players. ### Example of code which causes multiplayer incompatibility For a long time, VP was coded without multiplayer in mind. As a result, it worked perfectly in singleplayer mode, but multiplayer didn't work at all. The reason for such incompatibility was logic that resulted in a different game state on each end. Notable examples of such logic: - [#9768][13]: `sort` usage, which doesn't guarantee a consistent order for equal objects in C++. - [#10112][14]: using a `set` with an undefined sorting order. - [#9867][15]: using pointers as `map` keys. - [#10250][16]: using the same cache for both UI and core logic processing. - [#9767][17], [#9970][18]: using the player ID during calculations. More examples, along with all my contributions, are [here][19]. From time to time, some exotic cases come up, but the root cause of incompatibility is always the same: some code produces a different game state on each end.
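To make the pitfall concrete, here is a small, hedged sketch of the same class of bug in Java (the real game core is C++, and the `Unit` class here is purely hypothetical). Iterating a hash-based collection of identity-hashed objects is the Java analogue of "pointers as `map` keys": the order can differ between two processes even when their game states are identical. Ordering by a stable, replicated ID makes the processing deterministic on every end:

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.Comparator;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Hypothetical Unit: its default hashCode() is identity-based, so a
// HashSet<Unit> may iterate in a different order on player A's and
// player B's machines -- a desync waiting to happen.
class Unit {
    final int id; // stable ID, identical on every client

    Unit(int id) {
        this.id = id;
    }
}

public class StableOrderDemo {
    // Multiplayer-safe processing: sort by the replicated ID first, so every
    // client walks the units in exactly the same order.
    static List<Integer> processInStableOrder(Collection<Unit> units) {
        List<Unit> ordered = new ArrayList<>(units);
        ordered.sort(Comparator.comparingInt(u -> u.id)); // stable, shared key
        List<Integer> processed = new ArrayList<>();
        for (Unit u : ordered) {
            processed.add(u.id); // "process" each unit in a deterministic order
        }
        return processed;
    }

    public static void main(String[] args) {
        // HashSet iteration order over identity-hashed objects is unstable
        // across JVM runs; the sorted result is always [1, 2, 3].
        Set<Unit> units = new HashSet<>(List.of(new Unit(3), new Unit(1), new Unit(2)));
        System.out.println(processInStableOrder(units));
    }
}
```

The same discipline (iterate in a replicated, well-defined order; never in hash or pointer order) is what the linked VP pull requests enforce in C++.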
## Reverse-engineering Firaxis has open-sourced the Windows DLL that contains the game's core logic, including AI processing, path algorithms, diplomacy, and so on. While many things can be changed in this DLL, there is still a significant amount of code that cannot be modified. The code responsible for rendering, UI, camera interaction, network byte transfer, and Steam integration is located directly inside the executable file and hasn't been open-sourced. Although this limits some modding possibilities, it is still commendable that at least a portion has been made available to the community. So, since we are able to build only part of the game, the following problems appear: - The code is locked to Visual Studio 2008, so we are unable to use many modern C++ features. - All the compiled libraries and the executable are 32-bit, which limits the amount of memory the game can use. - Inability to fix closed-source bugs and memory leaks. - Inability to mod any behavior inside the executable (rendering, network, and so on). Moreover, the executables are CEG-protected (a deprecated, but still working, Steam DRM protection), which makes it impossible to patch them and distribute patched versions along with modification content. However, there is some good news too: - Although the executable is CEG-protected, we can still patch it at runtime. This is easy to implement because we already control the DLL code that is injected there. - The Windows executable is not obfuscated with any advanced techniques, so it is not that hard to patch. - Finally, I'm unsure whether it was intentional or not, but the Linux executable ships with symbol names and virtual tables preserved. By investigating it, it is easy to match its named functions with the corresponding ones inside the stripped Windows executable. <%= picture 'images/2023-11-10-civilization/ghidra-civ5-mp.webp' %> So, we actually can change behavior inside the executable, and furthermore, it is not that hard to do.
As a result, a [simple PoC that shows it is possible][20] has been written. Here, we are modifying an underlying variable inside the executable that indicates whether or not a forced re-sync was scheduled. By tying everything together with some buttons, we get a brand-new feature that works by interacting with closed-source executable code. This method had not previously been used in Civilization 5 modding (although interaction with the executable was implemented earlier in a solid project, [MPPatch][21]), and it opens up a wide range of new possibilities. ## Conclusion So, nothing else to say here; I'm just promoting the project and my participation in it. If you were a Civilization 5 player somewhere in the past, give it a try with mods; it's definitely worth it. <!-- prettier-ignore-start --> [1]: https://store.steampowered.com/news/app/8930/view/2912096327579037004 [2]: https://steamcharts.com/app/8930 [3]: https://steamcommunity.com/workshop/browse/?appid=8930 [4]: https://forums.civfanatics.com/categories/civilization-v.385/ [5]: https://github.com/LoneGazebo/Community-Patch-DLL [6]: https://forums.civfanatics.com/forums/community-patch-project.497/ [7]: https://discord.com/invite/KbgmCRU [8]: https://forums.civfanatics.com/resources/more-unique-components-for-vox-populi.26966/ [9]: https://forums.civfanatics.com/threads/poll-more-wonders-for-vp.653498/ [10]: https://forums.civfanatics.com/threads/even-more-resources-for-vox-populi.654431/ [11]: https://github.com/LoneGazebo/Community-Patch-DLL/#how-can-i-play-this [12]: https://jfdmodding.fandom.com/wiki/Civ_V_Mod_Installation [13]: https://github.com/LoneGazebo/Community-Patch-DLL/issues/9768#issuecomment-1521206665 [14]: https://github.com/LoneGazebo/Community-Patch-DLL/pull/10112 [15]: https://github.com/LoneGazebo/Community-Patch-DLL/pull/9867 [16]: https://github.com/LoneGazebo/Community-Patch-DLL/pull/10250 [17]: https://github.com/LoneGazebo/Community-Patch-DLL/pull/9767 [18]:
https://github.com/LoneGazebo/Community-Patch-DLL/pull/9970 [19]: https://github.com/LoneGazebo/Community-Patch-DLL/pulls?q=is%3Apr%20is%3Aclosed%20author%3Aseroperson [20]: https://github.com/LoneGazebo/Community-Patch-DLL/pull/10281 [21]: https://github.com/Lymia/MPPatch [22]: https://forums.civfanatics.com/threads/voxpopuli-modpacks-update-4-22.685164/ [23]: https://en.wikipedia.org/wiki/Civilization_V <!-- prettier-ignore-end --> --- # Article: [Link saver bot for Telegram](/2023/09/08/link-saver-bot-for-telegram/) I'm excited to present my recent pet project, which has finally arrived! Here I will describe how it works and which technologies are used internally. <!--more--> ## What is it? **[It is a Telegram bot][1]** 🛫 that stores your links and provides a simple management interface. I had been looking for something like that for a long time because I usually use Telegram to quickly store or sync something between devices. Although there are many bookmark managers available, as well as the "Saved Messages" chat with its "Links" menu, they are uncomfortable for me for various reasons. ## What can it do and how to use it? The application's functionality is pretty simple. As of now, you can: - Save a link by sending it to the chat. - List all saved links by typing either the `/ls` or `/list` command. - Remove a stored link by selecting it from the `/ls` output and pressing the `Remove` button. In the future, I also plan to add some simple category management and improve the user-friendliness of the `/ls` output. After that, I believe the project will be complete, since I don't want to bloat the application with dozens of functions. Let it be simple, but still quite useful. ## Conclusion I'm open to any feedback and suggestions for features and/or changes.
<!-- prettier-ignore-start --> [1]: https://t.me/sp_link_saver_bot [2]: https://zio.dev/ [3]: https://github.com/bot4s/telegram [4]: https://scala-slick.org/ [5]: https://flywaydb.org/ [6]: /2023/08/31/using-scala-with-graalvm/ [7]: https://nixos.org/ [8]: https://fluxcd.io/flux/ <!-- prettier-ignore-end --> --- # Article: [Using Scala with GraalVM](/2023/08/31/using-scala-with-graalvm/) Recently, I successfully migrated my Scala project to GraalVM's native build. GraalVM is undeservedly unpopular in the Scala community, and it is rare to see it mentioned anywhere. This article explains how to properly configure GraalVM and why you might want to try Scala on GraalVM. <!--more--> ## What is GraalVM? In short, GraalVM makes it easy to build a native binary file instead of a `.jar`. This results in faster startup and overall performance, and a slimmer, more secure application that does not require a JRE to run. For more information, visit the [official website][1]. There are also some [limitations][2], but they can usually be bypassed with just some additional configuration. And finally, all of the above also applies to applications written in Scala, which is why we're here. ## Why not scala-native? If your application is light enough and doesn't have many dependencies, then `scala-native` would be a better choice. Overall, it builds faster, runs faster, and requires less memory. But if your application does have many dependencies, then pray that all of them are published for the `scala-native` target and that there is no JVM exotica anywhere under the hood, because every Scala library you use must be built for the `scala-native` target; otherwise, your application won't even build. In short: GraalVM works nearly always, while `scala-native` works only if you're very lucky. ## How to configure GraalVM with Scala Configuring a build that produces native binaries is relatively easy. If your project is simple enough, everything should work out-of-the-box.
For more complex projects, additional configuration may be necessary. So, let's start from scratch. We create an empty project with: ```shell $ sbt new scala/scala-seed.g8 ``` Next, we add the plugin that provides GraalVM support: ```scala // project/plugins.sbt addSbtPlugin("org.scalameta" % "sbt-native-image" % "0.3.4") ``` Next, set up the final configuration. Actually, you can just enable `NativeImagePlugin` without any additional settings, but I recommend adding the ones listed below. You can also read more about the parameters [here][3]. ```scala // build.sbt // ... lazy val nativeBuildSettings = Seq( nativeImageOptions ++= Seq( // "fallback" means producing a file which requires a JVM to run // GraalVM switches to a fallback if it's unable to generate a JVM-free native image // This option disables such behavior and fails the build instead "--no-fallback", // Uses static linking instead of dynamic linking // Allows you to run your binary even in scratch containers "--static", // Makes image building output more verbose "--verbose", // Provides more detail if something goes wrong "-H:+ReportExceptionStackTraces" ) ) lazy val root = (project in file(".")) .enablePlugins(NativeImagePlugin) .settings( // ... ) .settings(nativeBuildSettings) ``` By the way, you don't need to install GraalVM itself; the plugin will install it for you. However, if you want to use a pre-installed GraalVM, you can set the environment variable `GRAALVM_INSTALLED` to `true`. Finally, run the build: ```shell $ sbt nativeImage ... [info] Native image ready! [info] /home/seroperson/graalvm-hello/target/native-image/graalvm-hello [success] Total time: 44 s, completed Aug 30, 2023 8:43:49 PM ``` Here is your native binary. You can run it and see the result: ```shell $ ./target/native-image/graalvm-hello hello ``` It is ready to be packed and shipped to production.
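Since `--static` produces a fully statically linked binary with no JRE or libc dependency, packing it for production can be as minimal as a `scratch` image. A hedged sketch (the paths follow the build output above; adjust them to your project):

```dockerfile
# Minimal sketch: works only because "--static" made the binary
# self-contained, so the image needs no base OS, libc, or JRE.
FROM scratch
COPY target/native-image/graalvm-hello /graalvm-hello
ENTRYPOINT ["/graalvm-hello"]
```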
## What's wrong with it <%= render "alert", type: "info", message: "**TL;DR**: If your project is more than a basic \"hello-world\", be prepared to spend some time fixing GraalVM-related problems." %> Well, it all goes smoothly as long as we have just a little hello-world project. However, when a project becomes larger, some problems may appear. Usually they are solvable, but still. So, here are the trade-offs I noticed: - Firstly, it is worth mentioning that GraalVM is not as popular in the Scala community as it is in the Java community. In the Java world, GraalVM support is implemented by leading web frameworks, such as [Spring][4] and [Quarkus][5]. Conversely, it looks like in the Scala world there are not many people who have even heard about it. I haven't come across any Scala framework that mentions GraalVM support or anything like that. - Build time increases **significantly**. Even roughly comparing plain-jar and native binary builds, I got **3 sec** vs **42 sec** on an empty project. My slightly larger project, with around 5,000 lines of code, takes 15 minutes to build with GraalVM. That's why you will probably need to configure a plain-jar build for debugging purposes and a native build for production. However, it is still necessary to test everything on a native build, as some functionality may work in a plain jar but not in native builds. - As mentioned earlier, there are some limitations to GraalVM's native binary building, such as reflection, proxies, JNI, and so on. If you are configuring a complex project, it probably won't work out-of-the-box due to **libraries using dynamic features**. This is the main reason things break after migrating to GraalVM. However, you can usually bypass these limitations by following the steps described [here][6]. These steps involve adding `.json` files (known as "metadata") to the build, which describe the dynamic features used in the application.
If something is misconfigured here, the application may not work properly. - There is a [metadata repository][7] that contains ready-to-use configurations for popular libraries. Java-world plugins automatically download files from this repository, but this feature is not yet implemented in `sbt-native-image`. - In addition, some metadata can be generated automatically using the [trace agent][8]. The `sbt-native-image` plugin has the `nativeImageRunAgent` command, which starts tracing. However, in my experience, it does not work well. I am not sure if this is a Scala-related issue or if tracing is generally problematic. - So, tracing is not always the perfect solution, repository fetching doesn't work at all, and as a result, you will mostly need to write and manage the `.json` metadata files manually when using GraalVM with Scala. In summary, adding GraalVM to your Scala application can be quite challenging for now (I hope it will become much easier in the future). However, the benefits are significant enough that it's still worth the effort. <!-- prettier-ignore-start --> [1]: https://www.graalvm.org/ [2]: https://www.graalvm.org/22.1/reference-manual/native-image/Limitations/ [3]: https://www.graalvm.org/latest/reference-manual/native-image/overview/BuildOptions/ [4]: https://spring.io/ [5]: https://quarkus.io/ [6]: https://www.graalvm.org/latest/reference-manual/native-image/dynamic-features/ [7]: https://github.com/oracle/graalvm-reachability-metadata/ [8]: https://www.graalvm.org/latest/reference-manual/native-image/metadata/AutomaticMetadataCollection/ <!-- prettier-ignore-end --> --- # Article: [Yet another way to establish Notion + Jekyll synchronization](/2023/08/26/yet-another-way-to-establish-notion-jekyll-synchronization/) Jekyll is a wonderful tool for building static websites. It has its own pros and cons. One of the drawbacks of many website generators is the lack of a CMS.
I would love to be able to write posts anywhere, but out-of-the-box, the only way to write something is via Git. This restricts you heavily:

- It is quite difficult to set up a comfortable workspace on mobile devices, so the only way to write is to use a desktop.
- Even on a desktop, you need a fair amount of configuration to be able to write posts. I have several desktops, and not every machine is dev-configured. You have to share SSH keys, install Git, install your favorite editor, and so on.

<!--more-->

## Existing “CMS” solutions

There are some popular solutions, such as [jekyll-admin][5], [prose.io][6], and [others][1]. Honestly, I haven't even tested them on a real website, because all of them lack the ability to provide comfortable editing on mobile. They usually provide a web UI, and you have to log in there to edit your content. I'm not sure about other features, such as seamless synchronization, drafts, and configuration. Maybe they are okay, but still, in-browser editing on mobile is a fatal drawback for me.

## Notion

[Notion][7] comes to the rescue. Just imagine being able to manage your Jekyll-powered website with Notion:

- First-class mobile application and web UI.
- It's free.
- Everything is synchronized out-of-the-box, so you can start writing on mobile and immediately switch to a desktop to continue.
- And many other features. You probably already know what Notion can do 🌚.

Initially, it may sound difficult to integrate Notion with Jekyll, but it's actually not (at least after reading this article). There are many tutorials about "how to configure Notion + Jekyll sync", and there are many ways to do it.

## So, Notion + Jekyll

I have come across various approaches, and most of them follow a similar pattern: pack the logic (connecting to the Notion API, accessing the database, parsing content into Markdown, etc.) into a Docker container, run it with a cron job, and git-commit the output.
In general, I like this approach, but I don't like having to manage a large amount of logic, especially if it's written in a language other than Ruby. Ideally, I want everything bundled as a Jekyll plugin with simple configuration via `_config.yml`. Luckily, I found a solution that almost meets my requirements: [emoriarty/jekyll-notion][2]. The only thing it does "wrong" for my taste is that it doesn't keep the content under Git control; instead, it syncs from Notion during every build. This approach mostly works okay, but there are some drawbacks:

- It depends on Notion API availability and internet connectivity during the build.
- It ties your website too tightly to Notion. If you decide to move to some other CMS, you will need to migrate all your content manually.
- If you accidentally delete your Notion database, you will lose all your content.
- If your website's source code is open-sourced, it will be incomplete: all the Notion content will be missing.

### jekyll-fetch-notion

That's why [seroperson/jekyll-fetch-notion][3] was born ([as a result of a pull request][4]). It's a fork of `jekyll-notion` aimed at synchronizing content separately from the build phase. It introduces a new command, `jekyll fetch_notion`, which pulls and converts your Notion content according to `_config.yml` and places it in the appropriate source directory. All you have to do after that is `git commit` and `git push` to trigger the build. The plugin still lacks some features, such as custom page fetching and plain data fetching, but it's a good start nonetheless.

<%= render "alert", type: "warning", message: "Since September 2024, [Notion no longer works for users based in Russia](https://www.notion.so/help/restrictions-for-customers-based-in-russia). I have archived the plugin and don't support it anymore. However, you can still use it; I think it will continue to work without any changes for a long time.
This approach allowed me to avoid losing all my posts when Notion closed my account." %>

## My final setup

So, to configure the sync, follow these steps:

- Create a Notion database by following this [guide](https://www.notion.so/help/intro-to-databases). If you're unsure what a database is, read this [article](https://www.notion.so/help/what-is-a-database).
- Create a new connection by going to [My Integrations](https://www.notion.so/my-integrations). Then copy the given secret.
- Assign the newly created connection to the database.
- Go to your website repository and edit the following files:
  - `Gemfile`:

    ```ruby
    # ...
    group :jekyll_plugins do
      # ...
      # github.com/seroperson/jekyll-fetch-notion
      gem 'jekyll-fetch-notion'
    end
    ```

  - `_config.yml`:

    ```yaml
    # _config.yml
    # ...
    # set the default layout for your posts;
    # make sure it's set, because otherwise your notion-powered
    # posts will look weird
    defaults:
      - scope:
          path: ""
          type: posts
        values:
          layout: post
          sitemap: true
          hidden: false
    # ...
    notion:
      # disables the sync at build stage and
      # makes the `jekyll fetch_notion` command available
      fetch_mode: true
      databases:
        # your database id
        # https://www.notion.so/{workspace_name}/{database_id}?v={view_id}
        - id: 123abc
          # filter based on your database properties
          # replace with your own
          filter: { "property": "Status", "select": { "equals": "published" } }
    ```

- (_optionally_) Configure an additional CI job in `.gitlab-ci.yml` (or the equivalent for the CI you use), so you can trigger synchronization just by pressing a button:

```yaml
# $BOT_NAME $BOT_EMAIL $TOKEN_NAME $ACCESS_TOKEN $NOTION_TOKEN
# variables must be defined
# ...
pages:
  stage: deploy
  # ...
notion-sync:
  stage: deploy
  rules:
    # run this job only if it was scheduled or triggered manually
    - if: ($CI_PIPELINE_SOURCE == "manual" || $CI_PIPELINE_SOURCE == "schedule") && $NOTION_TOKEN != null && $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
  script:
    - |-
      # run our synchronization command
      bundle exec jekyll fetch_notion
      # if there are any changes from notion, commit and push them
      # note: be sure that all generated files
      # (i.e. by `bundle install`) are git-ignored
      if [[ $(git status --porcelain) ]]; then
        git config user.name $BOT_NAME
        git config user.email $BOT_EMAIL
        git add .
        git commit -m "Notion sync $(date +"%Y-%m-%d")"
        git remote add ci "https://$TOKEN_NAME:$ACCESS_TOKEN@gitlab.com/<username>/<your-repo>.git"
        git push ci HEAD:$CI_COMMIT_REF_NAME
      fi
```

- (_optionally_) Schedule a build via the GitLab UI.

Make sure all necessary environment variables are set.

<!-- prettier-ignore-start -->
[1]: https://github.com/planetjekyll/awesome-jekyll-editors
[2]: https://github.com/emoriarty/jekyll-notion
[3]: https://github.com/seroperson/jekyll-fetch-notion
[4]: https://github.com/emoriarty/jekyll-notion/pull/68
[5]: https://github.com/jekyll/jekyll-admin
[6]: https://github.com/prose/prose
[7]: https://notion.so/
<!-- prettier-ignore-end -->

---

# Article: [For Honor Season 10 stats](/2019/08/01/for-honor-season-10-stats/)

This post describes my attempt to collect and visualize statistics for the recent season of [For Honor][1]. There will be many things related directly to the game, as well as some technical details which may not be so interesting for players.

<!--more-->

<%= render "for-honor/graph" %>

For those who are not familiar with the game at all: it is an online fighting game by [Ubisoft][2] with uncommon mechanics, tons of characters, game modes and so on. I have been playing it for a long time, and I can say that in general the game is pretty nice, but there are some pitfalls.
I am not going to analyze the reasons behind these results, leaving that opportunity to Ubisoft's upcoming "State Of Balance". Actually, the goals of this experiment are:

- To provide one more source of season stats, based on a completely different population than the upcoming "State Of Balance" (PC-only, ranked top-100).
- To be able to compare the official results with something else (i.e. what we have now).
- If possible, to encourage Ubisoft to publish more detailed summaries at the end of each season.

## Technical details

Ubisoft provides a web UI for [analyzing your stats][3]. There we can notice at least the following interesting things:

- Per-character statistics: kills, deaths, assists, wins, losses etc.
- The current top-100 ranked leaderboard.

The first key thing is that the web client talks to a backend API, so we can make plain standalone requests. The second is that this API provides access to any player's stats (not only your own) by profile id (which can also be retrieved by username). These possibilities suggest that we can retrieve enough data to build and analyze our own "State Of Balance", but there are some pitfalls:

- For now, the top-100 leaderboard is the only source of players. It seems we can't discover players automatically anywhere else.
- The data Ubisoft provides is not enough to see the full picture.

Currently I can rely only on the API mentioned above, discovering its features by inspecting the web client. It is quite possible that it has many more features than I know about. Perhaps there exists an API more suitable for such purposes (for example, the one the desktop client uses), but we have no information about it. For example, we can't build [a graph like this][4] simply because we have no information about players' opponents. But despite these inconveniences, we can still try to benefit from what we have.
By querying the leaderboard from time to time to collect more players, and fetching their stats since the beginning of the season, we can analyze quite a lot of interesting things.

## Analyzing pick-rates

The following graphs are the pick-rate stats (the X axis is the total games played on a certain hero) of all players that were (or still are) at any position in the leaderboard (PC-only). Currently there are [about 300 players][5] (if you changed your nickname during the season, the old one is listed).

<%= render "for-honor/graph_pick_rate" %>

Also take a look at the "average games per player" graphs. I added them to highlight situations where some players grind a character so much that they skew the overall picture. These graphs were calculated as `$total_hero_games / $total_hero_players`, where `$total_hero_players` is the total number of players who played at least one game on this character. If the number is very large, it is quite possible that a few players play this character a lot. For example, Valkyrie is top-4 in ranked pick-rate just because someone played her for 400+ games during the season.

And note that there are some differences from Ubisoft's techniques of collecting data:

- The official "State Of Balance" is merged data across all platforms vs my "PC-only".
- The official "State Of Balance" is based on many more players. We are talking about the platinum+ split (top 4%) vs my "top-100" (possibly something like ~0.5%, but I don't know the exact numbers).
- Also, it is not clear which duels Ubisoft means. Ranked? As we can see, there is a noticeable difference between ranked and unranked duels.
- I don't know exactly how Ubisoft calculates pick-rates. The pick-rates above were calculated via a simple `$total_hero_games / $total_games`, but such a formula has a possible inaccuracy. Take the top-4 Valkyrie case as an example; quite possibly Ubisoft adjusts for such situations somehow.
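To make the arithmetic concrete, here is a small sketch of both numbers, taking pick-rate as a hero's share of all games. The totals are made up for illustration; the real values come from the collected stats:

```shell
# made-up totals for one hero across the whole dataset
total_hero_games=1200    # games played on this hero
total_games=31000        # games played on all heroes combined
total_hero_players=280   # players with at least one game on this hero

# pick-rate: this hero's share of all games
awk -v g="$total_hero_games" -v t="$total_games" \
  'BEGIN { printf "pick-rate: %.1f%%\n", 100 * g / t }'

# average games per player: $total_hero_games / $total_hero_players
awk -v g="$total_hero_games" -v p="$total_hero_players" \
  'BEGIN { printf "average games per player: %.1f\n", g / p }'
```

A single grinder inflates both numbers at once, which is exactly the Valkyrie situation described above.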
In general, the results are quite similar to [Ubisoft's results for the previous season][7] (we don't consider Raider/Lawbringer/Sakura because they are "new" compared to the previous season), but there are still some curious things. Let's analyze the most inconsistent ranked duel results. Since Ubisoft talks about viability in their "State Of Balance", we assume they present ranked duels data, simply because unranked duels are played more casually and there you usually aren't thinking about viability. So:

- Ubisoft's Shugoki has **6.6%** vs **1.0%** in my results. Possibly just because Shugoki was "new" in the previous season, which made him temporarily viable. I guess most players know how to play against him by now, and since he can't really threaten them anymore, his pick-rate is that low.
- Ubisoft's Black Prior has **9.7%** vs **3.4%**. In my opinion, this difference partially has the same reason as in Shugoki's case. But objectively Black Prior is more viable than Shugoki even when his toolkit is well known. That is why his pick-rate is not total garbage, but also not so high. I think he sits at 3% despite his viability because of his repetitive gameplay - just like Conqueror (who has **2.3%** according to my results).
- Ubisoft's Kensei - **6.2%** vs **1.9%**, Gladiator - **5.0%** vs **1.3%**, Orochi - **7.7%** vs **3.9%**. I have several guesses that might explain such differences:
  - Possibly Ubisoft's data is about unranked duels (or mixed ranked/unranked).
  - Possibly there were players in the previous season grinding these characters a lot (like the one who brought Valkyrie into the top-4 in my data), and Ubisoft does not adjust for such cases in any way.
  - Possibly it is the impact of:
    - different populations (platinum+ data merged across consoles vs top-100 PC-only);
    - recent season changes (for example, it may be hard to play Gladiator or Orochi against Raider or Sakura, so their pick-rates fell);
    - different methods of calculating pick-rates.

Anyway, let's wait for the next "State Of Balance" to compare the data.

## Analyzing win-rates

The next graphs are win-rates. I have split the data into two categories:

- Average total win-rate, calculated by the following formula: `$total_hero_wins / $total_hero_games`
- Average player win-rate: `(sum of ($player.total_hero_wins / $player.total_hero_games)) / $total_hero_players`

Possibly there are more suitable ways to calculate such things, but I left it as it is. As I said above, we can't build a win/loss matrix because we can't retrieve such data. We also can't filter out duels between opponents who are not in the same MMR bracket (Ubisoft says they actually do that). So:

<%= render "for-honor/graph_win_rate" %>

It sometimes coincides with Ubisoft's results for the previous season, but mostly it does not, so I am going to leave it without a comparison like the one in the previous section. I think a possible reason for such differences is the absence of filtering by MMR bracket. Also, just as with pick-rates, ranked and unranked duels vary significantly. Again, quite possibly this is a consequence of ranked matchmaking enforcing the same MMR bracket. If not, maybe it is an occasion for Ubisoft to publish such results separately too.

_Just a fun obvious fact from the graph above: if you are a top-100 player and you play Raider/Sakura/Lawbringer, it is highly likely you will win 3/4 of your duels._

## Conclusion

All the data was presented for top-100 players, but I'm inclined to think that the overall and platinum+ populations will be similar. However, we will check that soon.
This experiment has taken quite a lot of my time to implement, and I hope you like it. I'd be glad to see your thoughts in the Reddit thread [here][8]. It is quite possible that I will continue to develop this project, so if you want to participate, you can leave your in-game username and your platform in the same Reddit thread so that I can count your activity in possible similar future reports. I have not developed a sane front-end to do this automatically yet, but possibly I will land one soon.

<!-- prettier-ignore-start -->
[1]: https://forhonorgame.com/
[2]: https://www.ubisoft.com/
[3]: https://game-forhonor.ubisoft.com/#/en-us/stats/player
[4]: https://ubistatic19-a.akamaihd.net/resource/en-us/game/forhonor/fh-game/s9_duel_win_top_349404.jpg
[5]: https://pastebin.com/BRbDPFFF
[7]: https://ubistatic19-a.akamaihd.net/resource/en-us/game/forhonor/fh-game/s9_duel_full_349402.jpg
[8]: https://www.reddit.com/r/forhonor/comments/cl41mt/carefully_collected_prestate_of_balance_season/
<!-- prettier-ignore-end -->

---

# Article: [Vim for writing code and prose](/2017/04/15/vim-for-writing-code-and-prose/)

I too often feel the need to write both 'prose' and 'code' stuff. My favorite editor is vim, and I use it for editing any kind of file. Unfortunately, it can be very uncomfortable to use the same `vimrc` for writing both prose and code. When you are writing prose, some plugins (or settings) intended for writing code can make your ass burn (and vice-versa). You can use `augroup` to deal with it, but it's very inflexible and makes your `vimrc` look ugly. You can also use `filetype` plugins, but as far as I can see, that's inflexible too.

<!--more-->

The main idea is pretty simple: split `vimrc` into three parts. The first part is the core, where I store the common settings. The second and third parts depend on the first and contain the appropriate settings for writing prose or code, respectively.
Such splitting is very scalable and makes it possible to manage the whole `vimrc` this way instead of brainfucking with `augroup` or `filetype` plugins. If you are already interested, you can find the sources at the end of this note. Here is a very generalized description of the steps and tips to get a working solution:

- Create three files: `vimrc.core`, `vimrc.code`, `vimrc.prose`. I just described these files above.
- Fill them with the appropriate configuration, move the common settings to `vimrc.core` and make the others `source` it. These things are very simple to implement; the only place you can get in trouble is per-mode plugins. That is easily solved if you are using [dein][1].
- Setting up shell aliases can sometimes be handy. I have done it like this:

  ```sh
  alias vp="vim -u $XDG_CONFIG_HOME/vim/vimrc.prose"
  alias vc="vim -u $XDG_CONFIG_HOME/vim/vimrc.code"
  ```

- To avoid caring about the current 'mode' in most situations, my `vimrc` now contains some logic to determine it and load the appropriate config (generally it just sources `vimrc.prose` or `vimrc.code`). For now the logic depends only on the file extension, but maybe it would be nice to make it use filetypes. However, it seems that's not as easy as it looks. My variant looks like this (yes, that's the whole `vimrc`):

  ```viml
  let s:prose_types = ['md', 'txt']

  if index(s:prose_types, expand('%:e')) != -1
    source $XDG_CONFIG_HOME/vim/vimrc.prose
  else
    source $XDG_CONFIG_HOME/vim/vimrc.code
  endif
  ```

## Conclusion

That's it. Check the sources [here][2].

<%= render "alert", type: "warning", message: "As of December 2023 I migrated from Vim to [AstroNvim](https://astronvim.com/), so I have a completely new `.vimrc` now and the link is not valid anymore. But you can still check early commits in my repository if you are interested."
%>

<!-- prettier-ignore-start -->
[1]: https://github.com/Shougo/dein.vim
[2]: https://github.com/seroperson/dotfiles/tree/master/.config/vim
<!-- prettier-ignore-end -->

---

## All Notes

# Note: [Cursor и WSL](/ru/2025/04/22/running-cursor-with-wsl/)

В этой заметке я расскажу, как настроить [Cursor][1] для комфортной работы с WSL.

<!--more-->

## Шаг 1: Подключаем Cursor к WSL

Для начала нужно открыть проект, который находится внутри WSL. Поскольку Cursor основан на VS Code, можно просто [последовать официальному туториалу VS Code по настройке WSL][2] или, если коротко, выполнить следующие действия:

- Найдите и установите расширение "WSL" во вкладке "Extensions".
- В левом нижнем углу появится новая кнопка, которая выглядит как "><".
- Нажмите на неё и выберите "Connect to WSL" или укажите конкретный дистрибутив через "Connect to WSL using Distro...".

Теперь вы сможете открывать проекты из WSL.

## Шаг 2: Настраиваем конфигурацию MCP

Если вы используете MCP, скорее всего, вы хотите запускать их в WSL, а не из `cmd.exe`. На момент версии `0.49.0` Cursor требует для этого дополнительной настройки.
Обычно файл `mcp.json` выглядит примерно так:

```json
{
  "mcpServers": {
    "context7": {
      "command": "npx",
      "args": ["-y", "@upstash/context7-mcp@latest"]
    }
  }
}
```

Чтобы запускать MCP в WSL, измените конфигурацию следующим образом:

```json
{
  "mcpServers": {
    "context7": {
      "command": "wsl",
      "args": ["bash", "-c", "'npx -y @upstash/context7-mcp@latest'"]
    }
  }
}
```

## Шаг 3: Настраиваем переменную $PATH в WSL

Чтобы иметь возможность вызывать команды `cursor` и `cursor-tunnel.exe` из WSL, добавьте директорию `bin/` Cursor'а в переменную окружения `$PATH`:

```sh
export PATH=$PATH:/mnt/c/Users/seroperson/AppData/Local/Programs/cursor/resources/app/bin
```

<!-- prettier-ignore-start -->
[1]: https://www.cursor.com
[2]: https://code.visualstudio.com/docs/remote/wsl-tutorial
<!-- prettier-ignore-end -->

---

# Note: [Cursor and WSL](/2025/04/22/running-cursor-with-wsl/)

Here I want to share a short guide on setting up [Cursor][1] to work well with WSL.

<!--more-->

## Step 1: Point Cursor to WSL

First, you need to open your project, which is located inside WSL. Since Cursor is based on VS Code, you can just [follow the VS Code tutorial][2], or simply:

- Search for the "WSL" extension in the "Extensions" tab and click "Install".
- Look at the bottom left corner for a new button that looks like "><".
- Click it and select "Connect to WSL" or "Connect to WSL using Distro...".

After this, you can open projects from WSL.

## Step 2: Edit your MCP configuration

If you use MCP, you'll probably want to run it in WSL too. As of version `0.49.0`, Cursor needs an extra change for this.
Usually your `mcp.json` looks like this:

```json
{
  "mcpServers": {
    "context7": {
      "command": "npx",
      "args": ["-y", "@upstash/context7-mcp@latest"]
    }
  }
}
```

To make it run in WSL, change it to something like this:

```json
{
  "mcpServers": {
    "context7": {
      "command": "wsl",
      "args": ["bash", "-c", "'npx -y @upstash/context7-mcp@latest'"]
    }
  }
}
```

## Step 3: Edit your WSL $PATH variable

To be able to run `cursor` and `cursor-tunnel.exe` from WSL, add Cursor's `bin/` folder to your `$PATH` like this:

```sh
export PATH=$PATH:/mnt/c/Users/seroperson/AppData/Local/Programs/cursor/resources/app/bin
```

<!-- prettier-ignore-start -->
[1]: https://www.cursor.com
[2]: https://code.visualstudio.com/docs/remote/wsl-tutorial
<!-- prettier-ignore-end -->

---

# Note: [Making website Telegram Instant View compatible](/2025/01/03/making-website-telegram-instant-view-compatible/)

[Telegram Instant View][1] is a feature that allows users to view web articles within Telegram without opening them in a browser. It reformats articles to fit the Telegram app's interface, making them easier to read on mobile devices. According to the documentation:

> Instant View allows Telegram users to view articles from around the Web in a
> consistent way, with zero loading time. When you get a link to an article via
> Telegram, simply tap the Instant View button, and the page will open
> instantly.

However, as of now, it looks like this feature is unavailable for newcomers. That's quite sad, as I feel IV is greatly underrated and should be revisited by the Telegram team to ensure its availability to developers. But still, there is a workaround to make it work.

<!--more-->

## What's the problem?

In short, to implement true IV for your website (where you just post a link and an IV button appears), you have to:

- Create a template using their [IV editor][2].
- Publish it and wait for Telegram Team approval.
And maybe it would work (still, this method is questionable), but for many years there has been [a problem with review submission][3]. You can Google it yourself; the review system simply doesn't work.

## How to resolve it

If we don't count weird links like `t.me/iv?url=...&rhash=...` as a solution, the only way to work around the template review is to adapt your site to an existing template. The first place I found this method is [petro_64's article][4]. In short, you have to use an undocumented `tg:site_verification` meta tag and make your website fit [a specific template][5]. Your structure doesn't have to be **exactly** the same; the key elements just need to be reachable by the specific XPath queries the IV template uses. For example, you may have more nesting levels for your headers or body or something else.

### Bridgetown / Jekyll Instant View compatibility

For static website generators, this method should not be a big problem, unless you have some really tricky markup. The only thing that can be difficult is when you need to change the markup produced by your `.md` generator. For example, my code snippets had the markup `pre > code`, but the IV template recognizes snippets just as `pre` and inline monospace as `code`, so they conflicted. With Bridgetown, you can fix this using [HTML Inspectors][6] (`plugins/iv-snippets.rb`).
```ruby
class Builders::PostConclusion < SiteBuilder
  def build
    inspect_html do |document, resource|
      # unwrap `pre > code` snippets so they match the IV template's markup
      document.query_selector_all("pre.highlight > code").each do |element|
        element.parent.add_child(element.inner_html)
        element.remove
      end
    end
  end
end
```

<!-- prettier-ignore-start -->
[1]: https://instantview.telegram.org
[2]: https://instantview.telegram.org/#instant-view-editor
[3]: https://bugs.telegram.org/c/21634
[4]: https://habr.com/ru/articles/807129/
[5]: https://gist.github.com/fishchev/ed2ca15d5ffd9594d41498a4bf9ba12e
[6]: https://www.bridgetownrb.com/docs/plugins/inspectors
<!-- prettier-ignore-end -->

---

# Note: [Publishing jar artifact via the Central Portal](/2024/11/21/publishing-to-maven-central-portal/)

Publishing to [Maven Central][1] is now available via the Central Portal (actually, it has been for a long time). No more registering via a Jira ticket; it now looks much more mature, with a sane web UI, a new API, webhooks, the ability to verify your namespace, and more. The web UI and API allow you to upload your bundles manually if your build system doesn't support it yet. Not many tools support this at the moment, but the number is growing. For example, [Gradle][2], [Maven][3], [sbt][4] and [mill][7] are already among them.

<!--more-->

<%= render "alert", type: "info", message: "As of 2025, [mill](https://mill-build.org/mill/contrib/sonatypecentral.html) and [sbt](https://www.scala-sbt.org/release/docs/Using-Sonatype.html) now support publishing to Sonatype Central! 🎉" %>

If your build tool doesn't support it and you don't have time to develop a plugin yourself, you can, as mentioned above, [upload your bundle manually][5]. This is quite easy if your build tool is already capable of producing Maven artifacts.
Assuming you have the following output after publishing to your local Maven repository (for example, `mill __.publishM2Local` or `sbt publishM2` produce such output):

```text
me
`-- seroperson
    `-- urlopt4s_2.13
        `-- 0.2.0
            |-- urlopt4s_2.13-0.2.0-javadoc.jar
            |-- urlopt4s_2.13-0.2.0-sources.jar
            |-- urlopt4s_2.13-0.2.0.jar
            |-- urlopt4s_2.13-0.2.0.pom

4 directories, 4 files
```

All you have to do to get the resulting bundle is to generate the `md5`, `sha1` and `asc` files. Sometimes your output already contains the necessary hashes - it depends on the build system. But if not, you can generate the hashes and sign the files like this (requires a pre-configured `gpg`):

```sh
find . \( -name "*.jar" -or -name "*.pom" \) -exec sh -c "md5sum {} | cut -d ' ' -f 1 > {}.md5; sha1sum {} | cut -d ' ' -f 1 > {}.sha1; gpg --armor --detach-sign {}" \;
```

Now compress everything into a `.zip` archive (for example, using [ouch][6]):

```
ouch compress me/ urlopt4s_2.13.zip
```

Voilà! The bundle is ready; now you can [manually upload it to the Central Portal][8].

<!-- prettier-ignore-start -->
[1]: https://central.sonatype.com
[2]: https://central.sonatype.org/publish/publish-portal-gradle/
[3]: https://central.sonatype.org/publish/publish-portal-maven/
[4]: https://github.com/xerial/sbt-sonatype
[5]: https://central.sonatype.org/publish-ea/publish-ea-guide/
[6]: https://github.com/ouch-org/ouch
[7]: https://mill-build.org/mill/contrib/sonatypecentral.html
[8]: https://central.sonatype.org/publish-ea/publish-ea-guide/#publishing-your-components
<!-- prettier-ignore-end -->

---

# Note: [Mastering Windows performance](/2022/11/28/mastering-windows-performance/)

Right now one of my workstations is an average laptop that is not performant enough to run heavy graphical applications (i.e. competitive games where fps/latency matters). I dual-boot Linux and Windows, and while Linux works smoothly, Windows does not.
If you have good hardware, Windows runs pretty well out-of-the-box without any tweaking, but if you don't, it is a good idea to tune it manually: disable redundant services, uninstall bloatware, tweak drivers, tune the network and so on. Actually, it is worth doing even on top hardware to squeeze out maximum performance.

<!--more-->

There are tons of shitty manuals in the top search positions, and it can be hard to find something genuinely handy. So I'm posting the most useful resources here as a starting point. To achieve the best results, it would be wise to reinstall Windows from scratch. If you use Windows a lot, think about making a multi-boot with "high-performance" and "regular" installations, because the first one must be as lightweight as possible.

- [djdallmann/GamingPCSetup][1] - a step-by-step collection of "how to tune your Windows" articles.
- [BoringBoredom/PC-Optimization-Hub][3] - another collection on performance tuning, in case you need to investigate the topic even more.
- [amitxv/EVA][7] - contains quite a lot of unique tweaks and handy scripts.
- [Calypto's Latency Guide][2] - further advanced OS-level optimizations.
- The [Latency & Gaming][4] Discord channel - a community of performance tweakers.
- [Atlas-OS/Atlas][8] - an open-source Windows build designed to optimize performance and latency. I have not tried it yet, but it looks like a good option if you are too lazy to do everything manually.
- [ChrisTitusTech/winutil][9] - applies many performance tweaks automatically, again in case you don't want to do them by hand.
<!-- prettier-ignore-start -->
[1]: https://github.com/djdallmann/GamingPCSetup
[2]: https://docs.google.com/document/d/1c2-lUJq74wuYK1WrA_bIvgb89dUN0sj8-hO3vqmrau4/edit
[3]: https://github.com/BoringBoredom/PC-Optimization-Hub/
[4]: https://discord.gg/452HBfSS4n
[5]: https://github.com/mbrt/gmailctl
[6]: https://github.com/antifuchs/gmail-britta
[7]: https://github.com/amitxv/EVA
[8]: https://github.com/Atlas-OS/Atlas
[9]: https://github.com/ChrisTitusTech/winutil
<!-- prettier-ignore-end -->

---

# Note: [vim system-wide clipboard](/2022/11/09/vim-system-wide-clipboard/)

I don't know why I haven't used it before, but [it is possible][1] to make vim's `yy` (and similar commands) work with the system clipboard and across different vim instances:

```viml
" enable copying from vim to the system clipboard
set clipboard=unnamedplus
```

<!-- prettier-ignore-start -->
[1]: https://stackoverflow.com/a/9167027
<!-- prettier-ignore-end -->

---

# Note: [tmux git-root window name](/2022/11/09/tmux-git-root-window-name/)

My common workflow with tmux is to use several per-project windows and navigate between them. Each "project" is usually a Git repo, and I do everything inside that repo. Until recently I had to name each window manually to avoid ending up with a mess of unnamed windows.

<!--more-->

I found that it's possible to set a window's name automatically according to the current directory: [link][1]. Even better, it is possible to use a custom shell command to set it. So, to name a window after the Git repository of its current pane, you can add the following to your `tmux.conf`:

```sh
# sets window name to basename of git-root directory
set -g automatic-rename on
set -g automatic-rename-format '#(basename "$(git -C #{pane_current_path} rev-parse --show-toplevel 2>/dev/null || echo "#{pane_current_path}")")'
```

With a recent tmux version (tested with `3.3a` on Linux and macOS) it should work without any weird issues.
<!-- prettier-ignore-start -->
[1]: https://stackoverflow.com/a/45010147
<!-- prettier-ignore-end -->

---

# Note: [vim error quit](/ru/2022/11/05/vim-error-quit/)

Иногда Вам необходимо завершить редактирование файла с ошибкой. Это случается не так часто и, на самом деле, единственная ситуация, в которой мне это обычно нужно, это отмена редактирования git коммита на этапе "Введите описание коммита". Обычно я просто убивал процесс с редактором, но в какой-то момент мне это надоело, и я обнаружил, что в vim'е есть команда `:cq` ([link][1]), которая закрывает редактор с "ошибкой", и коммит, собственно, отменяется.

<!-- prettier-ignore-start -->
[1]: https://stackoverflow.com/a/4323790
<!-- prettier-ignore-end -->

---

# Note: [vim error quit](/2022/11/05/vim-error-quit/)

Sometimes you need to quit the editor with an error code. It doesn't happen often; the most notable situation is when you need to abort a git commit at the "write your message" stage. You can kill the editor's process manually or, if you use vim, enter `:cq` ([link][1]). Finally I don't have to kill my vim instance by hand.

<!-- prettier-ignore-start -->
[1]: https://stackoverflow.com/a/4323790
<!-- prettier-ignore-end -->

---

## License

All content is © 2025 seroperson

## Last Updated

Generated: 2025-12-14 21:50:02 UTC