The quest for the most efficient method to host simple websites, such as a local hairdresser’s page, is a topic I often ponder. These websites require infrequent updates, maybe just a change in operating hours or the addition of new photographs. Despite this, many developers opt for comprehensive Content Management Systems (CMS) like WordPress or Drupal. Such platforms offer extensive features, including user and content management, which are largely underutilized for sites with sporadic content updates. Moreover, these solutions necessitate a continuously running database, awaiting visitors. This approach, in my view, represents an inefficient use of resources, prompting me to advocate for the adoption of a headless CMS.
In this post, I’ll detail my chosen architecture implementing a headless CMS solution, diverging from traditional practices. Unlike standard CMS platforms that provide RESTful APIs, my approach involves generating static JSON files. This method significantly simplifies content delivery, reducing the demand on server resources and enhancing site performance.
This headless CMS focuses on small businesses and simple projects; because of its limitations, it is not suited for web shops or other advanced websites.
The DNS component, Amazon Route 53, provides a hosted zone with all the necessary records and redirects. The domain names for the website and the CMS are registered here for about 9€/year; the hosted zone itself costs about 6€/year.
CloudFront acts as a CDN (Content Delivery Network), but I mainly use it for HTTPS offloading, with a certificate managed in AWS Certificate Manager. The distribution terminates HTTPS and forwards requests to the HTTP address of the S3 bucket’s website endpoint.
I’ve activated the static website hosting feature of the S3 bucket, so it serves all the necessary files: HTML, CSS, JavaScript, images, and even data in the form of JSON files. Everything in this bucket is publicly readable, and only the CMS is authorized to update or delete objects. With a normal load, this hosting costs about 2 cents per month. For scenarios anticipating higher traffic, the caching features of CloudFront can be activated.
If I want to handle website-generated events or form submissions, I deploy a dedicated AWS Lambda function for the task. Its cost depends on the number of invocations and the work performed, so it is also very scalable.
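As an illustration only, a minimal form handler could look something like the sketch below (the class and field names are hypothetical, not the actual implementation):

```java
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

import java.util.Map;

// Hypothetical sketch of a contact-form handler.
public class ContactFormHandler implements RequestHandler<Map<String, String>, String> {

    @Override
    public String handleRequest(Map<String, String> form, Context context) {
        // Validate the submitted fields before doing anything with them.
        String email = form.getOrDefault("email", "");
        String message = form.getOrDefault("message", "");
        if (email.isEmpty() || message.isEmpty()) {
            return "Missing required fields";
        }
        // Forward the message (e.g. via e-mail) or store it for the site owner.
        context.getLogger().log("Received form submission from " + email);
        return "Thank you for your message!";
    }
}
```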
The most significant expense in this setup is keeping the CMS running. For this I’ve deployed an EC2 instance, which costs around 100€ per year. An application load balancer takes care of HTTPS offloading and routes requests to the instance where the CMS is hosted.
Access to the CMS is safeguarded by integration with an OpenID provider, such as Google or Microsoft, so authentication is both secure and user-friendly. Permissions are assigned based on the user’s email address, granting the appropriate rights to modify website content, including images and data. Updating a text, for example, rewrites a JSON file in the designated S3 bucket, so the client website immediately shows the latest content.
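With the AWS SDK for Java v2, such an update could be written roughly like this (the bucket name and object key are made up for the example):

```java
import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;

public class ContentPublisher {

    private final S3Client s3 = S3Client.create();

    // Overwrite the JSON file that the client website reads its content from.
    public void publishText(String json) {
        PutObjectRequest request = PutObjectRequest.builder()
                .bucket("hairdresser-website")   // hypothetical bucket name
                .key("data/content.json")        // hypothetical object key
                .contentType("application/json")
                .build();
        s3.putObject(request, RequestBody.fromString(json));
    }
}
```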
Adopting a headless CMS for simple websites not only aligns with the principles of efficiency and simplicity but also opens new possibilities for web development. By focusing on what truly matters - delivering content in the most direct and uncomplicated manner - we can create faster, more reliable, and cheaper websites. Stay tuned for further updates as I delve deeper into this venture, sharing insights and learnings along the way.
The author begins with the idea of prosthetics, something that is unique to the human species. We use clothing to keep us warm or cool, shoes to protect our feet, glasses to improve our sight. There are even internal prosthetics (implants), like tooth fillings and pacemakers, that significantly improve our well-being. You could even say that prosthetics differentiate us from animals. So why don’t we have a prosthetic that improves our brains?
The solution to this problem, according to Forte, is note-taking: “Take notes everywhere, all the time, and organize them digitally”. Whenever you hear something interesting, learn something new, have a random thought, or meet someone interesting: take a quick note. Afterwards you organize your notes in a structured way, using a tool like OneNote, Google Keep, Evernote, or similar. By doing this consistently, you build a custom knowledge base that is available to you at all times, helping you find information quickly.
I’ve always loved the idea of having a solid knowledge base. My first attempt was running my own local Wikipedia server on my NAS. It was easy to add, search, and manage content, but it had some downsides: no mobile app, no easy export to other formats, being written in PHP, and the cost of running the server.
My current approach is to use a Git repository for all my notes. Each note is a Markdown file, essentially a text file with some formatting syntax. For example, this entire website is written using only Markdown files on GitHub Pages. The big benefit of this approach is that I can easily search, modify, and add these text files and organize them into folders. Moreover, plain text will never become outdated or unreadable, making it a future-proof option. I check my files into a major Git provider like GitHub, which comes with its own apps.
Keeping track of when, where, and what I bought has helped me a lot in the past. It takes only a minute to find out whether something is still under warranty, or what brand something is in case I want to buy it again. I don’t add too many details, since I can still refer back to the original invoices if needed.
My notes also contain details about people I have met. This is really useful for people you don’t see often, when you tend to forget the last thing they mentioned to you.
I even started adding my TODO lists and shopping lists to Git.
Whenever I come across an interesting topic or suggestion, I make sure to add it to my list of information to process. A few times each week, I set aside some time to review my notes and decide how to proceed. I keep anything I find particularly useful, organize it, or create tasks for myself if more work is needed. By consistently processing and organizing the information I come across, I’m able to build a comprehensive knowledge base that helps me in all aspects of my life.
The book is aimed at beginners and assumes no prior knowledge of programming. I however found it quite advanced and dry, so I would not recommend starting it without prior programming experience. It covers the basics of Go, including syntax, data types, functions, and control structures, but also more advanced topics such as concurrency, networking, and error handling.
I stuck to the basics and used Go to implement Advent of Code 2015. Go seems especially useful for writing microservices and fast basic scripts. I don’t know how the language holds up in big software teams, or how much support you can find.
For me personally it’s one of the best guides on being a software developer. First of all it is very motivational and teaches you the right mindset for tackling work-related problems. It also provides a lot of useful practices that you can see as a toolbox: it’s up to you to pick the right tools for the job. The book is still interesting for experienced developers, because it explains some concepts in another way, and you might always discover a hole in your knowledge. Some concepts were better explained in Clean Code, for example ‘orthogonality’: I prefer Robert C. Martin’s explanation, “Avoid side effects”.
I would love to hear your feedback, as I’m eager to learn from other people’s experiences.
According to Agile Retrospectives you should organise your retrospective into the following parts: set the stage, gather data, generate insights, decide what to do, and close the retrospective.
At the moment of writing, I have 5 years of experience as a Scrum Master or Team Lead. I have worked in corporations with different approaches: textbook Scrum, Kanban, Waterfall with some elements of Scrum,… During that time I witnessed or hosted many retrospectives, some good, others awful with almost no participation from the team members. I have also facilitated retrospectives for other teams, and asked external people to host one for my own team. These are my observations and lessons learned:
Most Scrum Masters prefer a fixed interval of retrospectives: a 90-minute meeting after each sprint, so every two or three weeks. I have been in several situations where this fixed interval is counterproductive, especially when the team is already performing well.
My approach is to hold retrospectives at a fixed interval when you have a new team or a lot of new members. Otherwise, I prefer to hold them ad hoc: at least once a month, but also when there have been big changes or frustrations, or after a big event like the first production release.
Setting the stage is crucial when you have introverts or new members in your team. Research has shown that people who don’t participate in the conversation during the first 10 minutes are not likely to join in afterwards. So it can be crucial to let everyone speak at the beginning of the meeting, so the ice is broken.
Typically, the first point on my agenda is to discuss the actions agreed in the previous retrospective. I can’t stress enough how important this is! Discussed and agreed-upon action points are useless if there is no follow-up or evaluation. What’s the point of the retrospective then? How do you know your team is evolving into something better?
After discussing the previous action points, I start the retrospective with some kind of opening exercise.
I try to avoid, as much as possible, having team members vote on tickets, because it has some big downsides.
My favorite approach is to prioritize the issues and actions as we discuss them. I try to be fair about it: an empathic but impartial approach, estimating which issues have the biggest impact. For actions, the team and I discuss which ones are doable and important, and we set those as our goal. There is no point in selecting 10 actions and having only 4 of them completed by the next retrospective. Don’t worry about selecting only a limited number of actions: if an issue or action is persistent, it will come up again in the next session.
I define troubled teams as teams with bad performance, a bad reputation, or conflicts within the team. In those cases retrospectives can be crucial and can lead the team onto a path of recovery, so I stick close to the theory of retrospectives and use fixed intervals. I use tools like a happiness index or a knowledge matrix to discover what the problems are. When there are interpersonal conflicts, I prefer to address them in one-on-one meetings instead of a meeting with the entire team. If, however, you are part of the conflict, or people actively undermine you, things will be harder: use the theory to build your retrospectives and let it be your foundation. Another tip: let someone external, from outside the team, lead the retrospective. That way you can fully participate as a member, instead of trying to be a neutral facilitator.
Draw a sailboat at sea heading towards an island. The following is shown on the drawing or added with post-its: the island is the team’s goal, the wind in the sails is everything pushing the team forward, the anchor is everything slowing the team down, and the rocks are the risks that lie ahead.
The ‘bad’ are the critical problems; the ‘ugly’ are things that are not going great, but are not problematic yet. I like to end the retrospective on a positive note, so with the ‘good’: all the things that happened that made the team flourish.
There are plenty of websites and books full of great ideas for retrospective exercises.
Overview of the structure of the application.
The application consists of several Java modules: a `core` module with the domain model and business rules, a persistence module, a module that fetches the train data from external webservices, a REST module, and an `application` module that glues everything together.
The `core` module contains the model and the most important business rules.
The packages `board`, `station`, `statistic` and `traindeparture` contain the model, mappers, events, associated exceptions and interfaces for each aggregate.
This may look a bit like Package by feature, but it’s very handy to find all aggregate-related files together.
The services however are grouped in their own package called `service`, and there you can find the real application logic, such as `BoardHarvester` and `StatisticService`.
The big advantage of this organisation is that all the important code is bundled in one package, so it truly is the heart of the application.
It has almost no dependencies, so it’s very easy to test and mock.
The only dependencies in the module are Spring (which could be moved to the `application` module if wanted) and support libraries like `Guava`.
It also contains the interfaces for the other modules.
Take the example of `StationRetriever`, which fetches all the stations for a certain country. The `core` is not interested in how exactly that happens. Is it queried from a database? Or imported from a flat file? These are just details that are not important for the business logic of the application. The only concern of the `core` is that the interface is called at certain moments during the execution of some services.
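A sketch of what such a port might look like (the exact names and signature in the project may differ):

```java
import java.util.List;

// Port owned by the core; the fetcher module provides the implementation.
// `Station` is the aggregate from the core model.
public interface StationRetriever {

    // Returns all known stations for the given country, e.g. "BE".
    List<Station> getStationsFor(String countryCode);
}
```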
The details of how the models are persisted or retrieved are grouped in this module. For this application the choice fell on AWS DynamoDB, a non-relational document store. DynamoDB has some nice features: it is cheap and really performant when querying on a given range key, which makes it perfect for gathering statistics. If another database is ever chosen, the only changes will happen in this module.
Sometimes there is not enough value in a separate persistence module, for example when your database model and core model are almost identical. In this case the DynamoDB data model needs a lot of annotations, bringing a lot of noise and implementation details about persistence. It does mean that we need extra mappers to translate the core model to a database model, but taking everything into account, a separate module seemed the best choice.
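Such a mapper is straightforward; a sketch with made-up field names (`Statistic` and `StatisticDocument` stand in for the real core and database types):

```java
// Translates between the annotated DynamoDB document and the clean core model,
// keeping persistence details out of the core module. Field names are illustrative.
public class StatisticMapper {

    public Statistic toCore(StatisticDocument document) {
        return new Statistic(document.getStationId(),
                             document.getDate(),
                             document.getAverageDelayInSeconds());
    }

    public StatisticDocument toDocument(Statistic statistic) {
        StatisticDocument document = new StatisticDocument();
        document.setStationId(statistic.getStationId());
        document.setDate(statistic.getDate());
        document.setAverageDelayInSeconds(statistic.getAverageDelayInSeconds());
        return document;
    }
}
```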
This module implements all the details of how the train information is fetched from external services.
There are two ways to do this: using the webservice of NMBS or the one of iRail, and by coincidence both are REST webservices. To fetch the data safely, possible time-outs and other errors thrown by the webservices have to be taken into account. Luckily there are awesome libraries like Hystrix, whose circuit breakers help out a lot. The only thing needed is the dependency, which is of course only interesting for this module, not the other ones.
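A rough sketch of such a command (the `IRailClient` and `Station` types are hypothetical stand-ins for the real client and model):

```java
import com.netflix.hystrix.HystrixCommand;
import com.netflix.hystrix.HystrixCommandGroupKey;

import java.util.Collections;
import java.util.List;

// Wraps the remote call; Hystrix adds time-outs and a circuit breaker around run().
public class FetchStationsCommand extends HystrixCommand<List<Station>> {

    private final IRailClient client; // hypothetical REST client

    public FetchStationsCommand(IRailClient client) {
        super(HystrixCommandGroupKey.Factory.asKey("iRail"));
        this.client = client;
    }

    @Override
    protected List<Station> run() {
        return client.fetchStations(); // the actual remote call
    }

    @Override
    protected List<Station> getFallback() {
        // Invoked when the webservice times out, fails, or the circuit is open.
        return Collections.emptyList();
    }
}
```

Calling `new FetchStationsCommand(client).execute()` then runs the request with the time-out and circuit-breaker protection applied.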
The interactions with the application happen through REST calls coming from a frontend, so this module is the gate to the outside world.
Since all the endpoints the application offers are read-only, a fitting name would also be `Projector`: it just exposes data.
In one of the earlier versions of this application, this module was a Vaadin module: it would collect the data and immediately provide a user interface for it. The decision was made to abandon Vaadin for the more common combination of a Java backend with a JavaScript (Angular/Vue/React) frontend. Again, the entire frontend could change without impacting any other module.
The final module is the glue that ties the loose modules together into one application. Basically it only contains the main method with, in this case, the Spring Boot application annotation and the system properties. The big advantage of this module is that compiling it produces a single jar file for the entire application. It is also the perfect place for the integration tests that exercise the entire application.
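That boils down to a standard Spring Boot entry point, something like this (the class name is made up):

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

// The only class in the application module: it wires all other modules together.
@SpringBootApplication
public class TrainDepartureApplication {

    public static void main(String[] args) {
        SpringApplication.run(TrainDepartureApplication.class, args);
    }
}
```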
Hexagonal architecture also works perfectly with events: a module fires an event and puts the message on an event bus. Every module or application that can connect to the event bus can consume those events. This way of working again guarantees that the modules and applications are loosely coupled: there are no tight dependencies, they just need to respect the format of the event.
An event bus can be in-memory, like the one from Guava: this is typically used for communication between modules within one application. It works like a broadcast system: a message is interpreted by zero, one or many interested modules. The receiver deals with the message asynchronously, and the sender neither knows nor needs to know about it.
In the hexagonal architecture it will typically be the `core` that sends the events; the other modules can choose whether or not to react to them.
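With Guava this takes only a few lines; a minimal sketch with a hypothetical event:

```java
import com.google.common.eventbus.AsyncEventBus;
import com.google.common.eventbus.EventBus;
import com.google.common.eventbus.Subscribe;

import java.util.concurrent.Executors;

public class EventBusExample {

    // A hypothetical event published by the core.
    static class TrainDepartedEvent {
        final String station;
        TrainDepartedEvent(String station) { this.station = station; }
    }

    // A module interested in the event registers a @Subscribe method.
    static class StatisticsListener {
        @Subscribe
        public void on(TrainDepartedEvent event) {
            System.out.println("Updating statistics for " + event.station);
        }
    }

    public static void main(String[] args) {
        // AsyncEventBus delivers messages on a separate executor,
        // so the sender is never blocked by the receivers.
        EventBus bus = new AsyncEventBus(Executors.newSingleThreadExecutor());
        bus.register(new StatisticsListener());
        bus.post(new TrainDepartedEvent("Ghent-Sint-Pieters")); // fire and forget
    }
}
```

Swapping `AsyncEventBus` for a plain `EventBus` makes delivery synchronous on the caller’s thread.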
Event buses are however most popular as separate applications, like RabbitMQ, ActiveMQ, Kafka,… These work perfectly for communication between different applications, or even between instances of the same application deployed on several servers.
Hexagonal architecture is a very powerful tool for designing your applications. It has a lot of support in the community, for example on Baeldung, and in literature like Clean Architecture. I personally enjoyed it a lot and keep using it whenever a project gets complex enough. I hope you enjoy it too!
JavaScript is one of the most popular programming languages according to StackOverflow, and its possibilities keep growing. You might think of JavaScript as something that just runs inside a browser to support websites, but with Node.js it has also become a popular language for backend systems and automation tools. This article shows some major improvements and features that might even be worth considering for Java.
Java has switched to a release train that publishes a new major release every 6 months, while still trying to maintain backwards compatibility. It’s a difficult process that leads to a lot of deprecations in the API and a code base that is hard to maintain.
JavaScript, on the other hand, has a language specification that is updated every year, called ECMAScript (or ES), where ES6 (the 2015 specification) is now the industry standard. It introduces some nice features you will find further down in the article. The interesting part is that you can use the newest features of JavaScript and still run on old browsers like Internet Explorer: projects like Babel compile your modern JavaScript code down to plain old JavaScript. This is something that should be possible in Java with the target option:
```
javac -source 11 -target 1.8 App.java
```
It fails, however, because of bytecode changes between JVM versions. For example, compiling Java 8 code for a Java 7 JVM doesn’t work, just like Java 11 code for a Java 10 JVM,…
Java is a statically-typed language: the compiler decides the type of your variables. In a dynamically-typed language like JavaScript, the interpreter assigns types to variables at runtime. This means that code like the snippet below is allowed, even though it is the definition of bad programming:
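The original example is not in this extract; a minimal reconstruction of the idea:

```javascript
let value = 42;        // value starts out as a number
value = "forty-two";   // now it's a string
value = [4, 2];        // and now an array: the interpreter allows it all
```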
So in general it is better to use static typing, and in JavaScript this is possible with an extension called TypeScript. Dynamic typing, however, offers a lot of flexibility and power, certainly where objects are concerned. Imagine you have a class with a teacher and a variable number of students. In Java we need to represent the students as a collection, so an object inside our object.
In JavaScript your objects can be defined dynamically, basically like a Map. This works amazingly well with document databases like Mongo:
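A hedged reconstruction of the idea (names are made up):

```javascript
// No separate collection class needed: fields can be added at runtime.
const classroom = {
  teacher: "Mrs. Jansen",
  student1: "Alice",
  student2: "Bob",
};
classroom.student3 = "Carol"; // the object simply grows

// Serializes directly to a JSON document, ready for Mongo.
console.log(JSON.stringify(classroom));
```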
One of the biggest advantages of JavaScript is how easy it is to make asynchronous or concurrent calls. Where Java uses Future, ExecutorService and Runnables, JavaScript only uses Promises. A Promise is either still pending, or it is settled with a result or an error. A small demonstration of the possibilities:
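The original demonstration is not in this extract; a small sketch of the typical patterns:

```javascript
// Simulate a slow remote call that settles after 100 ms.
const fetchUser = (id) =>
  new Promise((resolve, reject) => {
    if (id <= 0) return reject(new Error("invalid id"));
    setTimeout(() => resolve({ id, name: `user-${id}` }), 100);
  });

// Chain on the result, or handle the error, without blocking.
fetchUser(1)
  .then((user) => console.log(user.name))
  .catch((err) => console.error(err.message));

// Run several calls concurrently and wait for all of them.
Promise.all([fetchUser(1), fetchUser(2), fetchUser(3)])
  .then((users) => console.log(`fetched ${users.length} users`));
```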
JavaScript offers some extremely handy operators that make writing code much easier and avoid boilerplate.
Instead of always doing null checks or size checks, JavaScript lets you put any object inside a conditional, where Java only accepts booleans:
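For example (a minimal sketch of truthy and falsy values):

```javascript
const message = "";        // empty string: falsy
const items = [1, 2, 3];   // objects, including arrays: truthy

if (!message) console.log("no message to show");
if (items.length) console.log(`showing ${items.length} items`);

// null and undefined are falsy too, so one check covers both:
let user = null;
if (!user) console.log("please log in");
```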
When you want to build a String with some variable parameters, Java forces you into concatenation. JavaScript introduced template literals, which use `${…}` placeholders inside backticks instead of standard quotes, like this:
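A small illustration (the values are made up):

```javascript
const name = "Sarah";
const unread = 3;

// Java: "Hello " + name + ", you have " + unread + " new messages"
console.log(`Hello ${name}, you have ${unread} new messages`);

// Template literals can even span multiple lines:
const html = `<p>
  Welcome back, ${name}!
</p>`;
console.log(html);
```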
The spread operator allows an iterable to expand in places where zero or more arguments are expected:
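A few typical uses, as a sketch:

```javascript
const base = [2, 7, 1];

// Expand an array into individual arguments:
console.log(Math.max(...base)); // 7

// Copy and extend an array without touching the original:
const extended = [0, ...base, 9]; // [0, 2, 7, 1, 9]
console.log(extended);

// Also works on objects, handy for shallow copies with overrides:
const defaults = { theme: "light", lang: "en" };
const settings = { ...defaults, theme: "dark" };
console.log(settings); // { theme: "dark", lang: "en" }
```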
In a lot of languages, code that consists of just variable assignments is typically very long: mapping attributes from one object to another. JavaScript introduced the destructuring notation, which reduces the amount of code and logic:
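A minimal sketch of the notation (the objects are made up):

```javascript
const user = { name: "Sarah", address: { city: "Ghent", zip: 9000 } };

// Pull attributes straight into local variables, even nested ones:
const { name, address: { city } } = user;
console.log(name, city); // Sarah Ghent

// Also works on arrays, including a rest element:
const [first, ...others] = [10, 20, 30];
console.log(first, others); // 10 [20, 30]
```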
JavaScript is no longer a language to be mocked, but one that can be admired for the way it makes developers’ lives easier. Some of its concepts are definitely worth looking into as Java continues evolving.
`-XX:+UseContainerSupport` is now enabled by default. You may also run into `Exception in thread "main" java.lang.NoClassDefFoundError: javax/xml/bind/JAXBException`, because the JAXB APIs are no longer shipped with the JDK. Just include:
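The original snippet is not in this extract; assuming a Maven build, the commonly used standalone JAXB artifacts look like this (version numbers may differ for your setup):

```xml
<!-- JAXB API, no longer part of the JDK since Java 11 -->
<dependency>
    <groupId>javax.xml.bind</groupId>
    <artifactId>jaxb-api</artifactId>
    <version>2.3.1</version>
</dependency>
<!-- a runtime implementation -->
<dependency>
    <groupId>org.glassfish.jaxb</groupId>
    <artifactId>jaxb-runtime</artifactId>
    <version>2.3.1</version>
</dependency>
```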