Have you ever wondered why some brands stick in your mind while others disappear as soon as you leave? The answer lies in storytelling.
Storytelling in marketing is the art of communicating a message through copy that creates emotions, builds trust, and connects with the target audience.
Especially now, when any brand is a few clicks away and consumers have numerous options, storytelling is crucial in making a brand stand out.
In this blog post, we're going to look at the most important parts of storytelling for your brand building. Let's get started!
Emotions, not facts, connect an event or thought to a memory. I'm sure you remember that one event from your past, and I know you know exactly how you felt. Perhaps you were excited or sad. No matter the feeling, you remember this event because you connected an emotion to it. People don't remember facts; they remember stories and the emotions they invoke.
By telling a story that hits your target audience right in the soul, you can evoke emotions that make your brand more relatable. When people feel connected to your brand, they are more likely to engage with it and even share their experiences with others.
Hitting your target audience right in the soul switches off their brain and makes them use their heart instead. This is also why every business book will tell you to find a highly specific target audience. It's much better to engage a tiny group of very motivated people than to reach a huge group who will never have an emotional response to your storytelling. Narrow it down, and find your niche.
Once you've caught your audience's attention and said that one line that hits them, they start to see your brand as more than just a product or service, and they become more likely to trust you.
Storytelling is more than a sales pitch: it can help you showcase your brand's values, mission, and culture, which makes it more human and relatable. Think about it: are you more likely to listen to a random person telling a story, or to someone you know? Who will you trust more? I bet it's the person you already know and trust.
Have you ever heard of that brand that made incredible products and didn't have to put any effort into marketing these products? Neither have I. It's not enough to put products or services out there. You need to set yourself apart from competitors.
Storytelling can help you differentiate yourself by bringing your brand to life and making it stand out amidst the crowded marketplace.
Take the example of Starbucks, a coffee brand that has made a name for itself with its storytelling. Through its packaging, store design, and messaging, Starbucks has created a narrative that is uniquely its own and creates an emotional connection with its customers.
You've seen 3 great reasons to use storytelling as a marketing tool, but there is one more. This last reason is a little more practical and by far the most difficult thing to accomplish.
Storytelling helps create content that engages your audience and keeps them hooked. Instead of pushing sales, storytelling shifts the focus to what your brand can do for your customers. You can use different formats like blog posts, videos, podcasts, and social media posts to tell your brand's story and make it fun, educational, and entertaining.
If you're targeting an audience that's "allergic" to sales pitches, like business owners in a time crunch or software developers (speaking from experience), a pushy sales pitch is going to fall on deaf ears.
These audiences need a more indirect method. Storytelling is one of the ways to reach these types of people. Once you reach these people, you'll have super fans. Just look at professionals who swear by using Apple products over anything else. They've been sold a story and a feeling, not a product.
Storytelling in marketing might be a buzzword by now, but done properly, it's a powerful tool that can help build emotional bonds, create lasting impressions, and differentiate you from the competition. By integrating storytelling into your marketing strategy, you can establish trust and loyalty, create engaging content, and ultimately drive conversions.
In the world of software development, databases are the heart of most applications. They serve as the backbone of every web application, storing and retrieving the data that powers the dynamic content we interact with. When starting or expanding a software project, one of the most important decisions you'll make is choosing the right database type.
In this blog post, we will explore the strengths and use cases of three popular database systems: MySQL, MongoDB, and Neo4j. By understanding the advantages and intricacies of each, you can make an informed decision tailored to your project's unique needs.
Before we look at the different database types, let's take a moment to think about the role that databases play in software development. A database is the repository where data is stored, managed, and retrieved by your application. Whether it's a blog, an e-commerce platform, or a social media network, databases are at the heart of all software projects.
A well-chosen database system can benefit performance, scalability, and data integrity. It ensures that your application can efficiently store and retrieve the information it needs to operate seamlessly.
If you've always used a single database type, you might be wondering why you should even think about this. Every database stores data that you can use. Why does the database type matter? Well, let's look at 3 different databases a little more closely to find out.
MySQL, a popular open-source relational database management system, is a dependable choice for many software applications. It excels in structuring data into tables with predefined schemas, making it ideal for projects that require structured, predictable data.
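To make "predefined schemas" concrete, here is a minimal sketch of the relational model. It uses Python's built-in sqlite3 module for portability rather than MySQL itself, and the table and column names are made up for illustration:

```python
import sqlite3

# In a relational database, the schema is declared up front:
# every row in "users" has exactly these columns and types.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, "
    "user_id INTEGER REFERENCES users(id), total REAL)"
)

conn.execute("INSERT INTO users VALUES (1, 'Alice')")
conn.execute("INSERT INTO orders VALUES (10, 1, 19.99)")

# Well-defined relationships make joins cheap and predictable.
row = conn.execute(
    "SELECT users.name, orders.total "
    "FROM orders JOIN users ON users.id = orders.user_id"
).fetchone()
```

The same structure expressed in MySQL would use nearly identical SQL, with stricter type enforcement.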
In conclusion, MySQL offers a reliable, structured way to store data and is particularly effective when dealing with predictable data and well-defined relationships. Its adherence to ACID principles guarantees data integrity and consistency. Now, let's delve into the realm of NoSQL databases, starting with MongoDB, and explore how it contrasts with relational databases.
MongoDB, a leading NoSQL database, takes a different approach. It's a document-oriented database that stores data in flexible, schema-less documents, making it an excellent choice for projects that require dynamic data structures.
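Here's a quick sketch of what "schema-less" means in practice: two documents in the same collection can have completely different shapes. The documents are shown as plain Python dicts, and the collection is hypothetical; with MongoDB itself, you would insert them through a driver such as PyMongo:

```python
import json

# Two documents in the same hypothetical "users" collection.
# The second adds fields the first doesn't have -- no schema migration needed.
users = [
    {"name": "Alice", "email": "alice@example.com"},
    {"name": "Bob", "email": "bob@example.com",
     "preferences": {"theme": "dark"}, "tags": ["admin", "beta"]},
]

# MongoDB stores documents in BSON, a binary superset of JSON,
# so anything that serializes to JSON fits naturally.
serialized = [json.dumps(u) for u in users]
```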
In essence, MongoDB's flexibility and scalability make it a strong contender for projects with evolving data requirements. However, bear in mind that every project is unique, and the "best" database depends on your specific needs and the nature of your data. Now that we've covered relational and document-oriented databases, let's move on to exploring the world of graph databases, starting with Neo4j.
Neo4j is a graph database designed to handle complex relationships and highly interconnected data. It's an excellent choice for projects that need to navigate intricate networks or analyze graph-like data.
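To see why a dedicated graph database helps, consider the kind of relationship question Neo4j answers natively: how many hops separate two people in a network? The sketch below answers it with a plain Python breadth-first search over a made-up "follows" graph; in Neo4j's Cypher query language, this would be a single shortestPath pattern match:

```python
from collections import deque

# A tiny social graph: who follows whom (hypothetical data).
follows = {
    "alice": ["bob"],
    "bob": ["carol"],
    "carol": ["dave"],
    "dave": [],
}

def degrees_of_separation(graph, start, target):
    """Breadth-first search: the hop count a graph database finds via a pattern match."""
    queue = deque([(start, 0)])
    seen = {start}
    while queue:
        node, depth = queue.popleft()
        if node == target:
            return depth
        for neighbor in graph.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append((neighbor, depth + 1))
    return None  # no path exists
```

Neo4j performs this traversal inside the database itself, which is what makes it fast on highly interconnected data.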
In conclusion, Neo4j is fantastic at handling complex relationships and interconnected data, making it a game-changer for projects that deal with extensive data interlinking and analysis. It's the go-to choice for uncovering insights from relationships and navigating through previously unknown data efficiently.
In conclusion, the choice of a database system is an important aspect of software development. MySQL, MongoDB, and Neo4j each offer specific advantages and use cases. By carefully evaluating your project's needs, you can make sure your data is stored and managed optimally, so you can quickly and easily develop your apps.
Cloudflare has been a blessing and a curse when it comes to taking care of SSL certificates for your websites and web applications. Cloudflare does most of the heavy lifting and makes sure you're protected from attacks. However, it's also a curse. Let me explain!
Cloudflare serves as a proxy between the internet and your webserver. This is great because they can filter suspicious behavior and protect your web server from attacks. The attacker will never be able to determine the IP address of your server, because this is hidden by Cloudflare.
However, this is also a problem. When generating SSL certificates, the provider needs to be able to verify that the requesting server is actually who it says it is. Certbot, for example, needs an external server to reach your server to verify that the domain name points to the requesting server. Only if it can verify this will it generate an SSL certificate for that domain.
The problem is that you still have Cloudflare in the middle of this interaction. The request from the external server will never reach your web server, so it can't verify the domain name points to your server. You can circumvent this by temporarily turning off the Cloudflare proxy and performing this check, but now you're not protected. And what happens when it's time to renew your certificate? Will you temporarily disable the Cloudflare proxy every time? I don't think so.
So I hear you say: "Even if you use Caddy to automatically generate these SSL certificates, you'll run into the same problems." And you would be right. However, Caddy has a very nice plugin you can install that interacts with the Cloudflare API to solve DNS challenges for Let's Encrypt.
The benefits of this setup are fantastic. If Caddy itself didn't make you excited about webservers, the thought of zero maintenance and no messing around with Cloudflare will. Let's look at how we can integrate this.
The Docker part of this integration is optional, but I prefer to run all of my applications in containers, including my Caddy installation. So you can follow that part or skip over it; it's your choice.
First, we'll need to use caddyserver/xcaddy to build a custom binary for Caddy. Caddy is written in Go, so we'll need to compile our binary to include the Cloudflare plugin.
You can do so by using this command:
$ xcaddy build \
    --with github.com/caddy-dns/cloudflare
This command will create a binary that includes the base Caddy installation and adds the Cloudflare DNS module. In a multi-stage Docker image, this looks like this:
FROM caddy:2.6.1-builder AS caddy-builder
RUN xcaddy build \
    --with github.com/caddy-dns/cloudflare
FROM caddy:2.6.1-alpine
COPY --from=caddy-builder /usr/bin/caddy /usr/bin/caddy
The reason I'm using a multi-stage Docker build, rather than doing everything in one stage, is the final image size. The Caddy builder image is quite large, as it includes all the development tooling. You won't need any of this in production; all you'll need is the resulting Go binary at /usr/bin/caddy.
One of the downsides is that building your custom Caddy binary isn't the quickest process. Luckily, you won't have to do this very often, so it's a great idea to publish your own Caddy base image (the Dockerfile above) and include that when you're building your final application.
As an example of this, I'll create a base image from the Dockerfile above and call this: roelofjanelsinga/caddy-cloudflare
I can now use this for hosting my web app like so:
FROM php:8.1-cli AS build-env
COPY --chown=www:www . /var/www/html
WORKDIR /var/www/html
# Perform some other build steps, like "npm run prod" and "composer install"
FROM roelofjanelsinga/caddy-cloudflare:latest
COPY --from=build-env /var/www/html /var/www/html
COPY ./docker/caddy/Caddyfile /etc/caddy/Caddyfile
Again, this is a multi-stage build to keep the final image as small as possible. In the second step, I'm using the custom Caddy base image, copying my project files from the build step into the final image, and including a custom Caddyfile with my domain configuration. This image can be served and will automatically take care of SSL.
In the custom Caddyfile, we'll need to add an entry to tell Caddy we want to use Cloudflare for our DNS challenges. Let me give you an example of my configuration first:
https://mydomain.com {
    root * /var/www/html/public
    encode zstd gzip
    file_server

    tls {
        dns cloudflare {env.CF_API_TOKEN}
        resolvers 1.1.1.1
    }
}
Keep in mind that this is a barebones, non-production configuration. It's just here to illustrate how to make this process work.
In the tls block, we've specified that Cloudflare should be used for DNS challenges, and you can see an environment variable for a Cloudflare API token.
Let's see how to get that token:
Now that you have the API token, the easiest way to use this is by including it in your docker-compose settings:
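For example, a docker-compose service for the custom Caddy image could look like this. This is a sketch: the image name comes from the earlier examples, and the volume paths are illustrative:

```yaml
services:
  caddy:
    image: roelofjanelsinga/caddy-cloudflare:latest
    ports:
      - "80:80"
      - "443:443"
    environment:
      # The token Caddy reads via {env.CF_API_TOKEN} in the Caddyfile
      - CF_API_TOKEN=${CF_API_TOKEN}
    volumes:
      - ./docker/caddy/Caddyfile:/etc/caddy/Caddyfile
      - caddy_data:/data

volumes:
  caddy_data:
```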
But you can supply this environment key in any way that works for you. Now, after you start your Caddy container, you'll notice that it instantly tries (and succeeds) to perform DNS challenges through Cloudflare and generate SSL certificates for your application.
You'll now have automatic SSL certificates as long as you use Caddy and you'll never have to mess with Cloudflare again. Pretty easy, right?
Search Engine Optimization (SEO) is essential for every website that wants to rank higher on search engines like Google, Bing, and Yahoo. The better your website's SEO, the higher the chances of attracting more visitors, more leads, and more sales. However, many website owners make SEO mistakes that hurt their website's ranking, causing them to lose potential customers and revenue. In this blog post, I'll discuss the top 10 SEO mistakes that you should avoid to improve your website's ranking and attract more customers.
Keyword research is one of the most critical steps in SEO. It's essential to know what your target audience is searching for online and what keywords they use. By researching the right keywords and using them in your content, meta titles, descriptions, and tags, you can increase your website's relevance to search engines and attract more organic traffic.
Title tags and meta descriptions are crucial elements that help search engines understand what your website is all about. If they are not optimized, search engines may display inaccurate or irrelevant information in the search results, leading to lower click-through rates (CTR). Make sure your title tags and meta descriptions accurately describe your content and include your target keywords.
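As an illustration, an optimized head section could look like this (the title and description text are placeholders):

```html
<head>
  <!-- Keep titles under roughly 60 characters and lead with the target keyword -->
  <title>10 SEO Mistakes That Hurt Your Ranking | Example Blog</title>
  <!-- Descriptions of roughly 150-160 characters tend to display in full -->
  <meta name="description"
        content="Avoid these common SEO mistakes to improve your website's ranking and attract more visitors, leads, and sales.">
</head>
```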
Search engines value original content, and duplicate content can hurt your website's ranking. Avoid copying content from other websites or using the same content across multiple pages on your website. Instead, create unique, high-quality content that adds value to your audience.
Mobile devices account for more than half of all internet traffic. If your website is not optimized for mobile devices, you're missing out on a significant number of potential visitors. Make sure your website is mobile-friendly, loads quickly, and is easy to navigate on small screens.
Page speed is a crucial ranking factor. Slow-loading pages can increase bounce rates, which negatively impacts your website's SEO. Optimize your website's images, reduce HTTP requests, minify CSS and JavaScript files, and use a content delivery network (CDN) to improve your page speed.
Internal linking helps search engines understand the structure of your website and the relevance of your content. It also keeps visitors on your website for longer, which improves engagement and reduces bounce rates. Use internal linking to connect related content and guide visitors to relevant pages on your website.
External links from high-authority websites can boost your website's authority and improve your ranking. However, be careful about the quality and relevance of the external links you use. Avoid spammy or low-quality links that can hurt your website's reputation.
Alt tags help search engines understand the content of images on your website. They also improve accessibility for visually impaired visitors. Make sure to use descriptive alt tags that accurately describe your images and include your target keywords.
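For example (filenames and text are illustrative):

```html
<!-- Vague: tells search engines and screen readers nothing -->
<img src="/images/img_0412.jpg" alt="photo">

<!-- Descriptive: describes the actual content of the image -->
<img src="/images/red-running-shoes.jpg"
     alt="Pair of red running shoes on a wooden floor">
```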
Social media can be an excellent source of traffic and backlinks for your website. Make sure to promote your content on social media platforms like Facebook, Twitter, LinkedIn, and Instagram, and encourage social sharing to increase your reach.
While rankings are essential, they are not the only measure of SEO success. Instead, focus on creating high-quality, valuable content that meets the needs of your target audience. By providing value to your visitors, you'll attract more traffic, engagement, and ultimately, more leads and sales.
In conclusion, avoiding these common SEO mistakes can help improve your website's rankings and drive more traffic to your website. It's essential to prioritize SEO and continually work on improving it to achieve long-term success in your online business.
If you're deploying your PHP applications as Docker images and you're interacting with S3 as your filesystem with Flysystem, you'll know that your Docker image won't be small. Flysystem is a fantastic library to interact with the filesystem and adding support for interacting with an S3 bucket is very easy, but it comes with the downside of having to include the massive AWS PHP SDK.
The current AWS PHP SDK includes all classes for every AWS service, even if you're never planning on using anything else besides S3. The author of Flysystem (Frank de Jonge) has already marked this as an issue, as the entire AWS PHP SDK is 29 MB. You can find that discussion here: Reducing package size.
This massive SDK is difficult to explain when you only need a tiny sliver of the functionality, so let's fix that!
In response, the AWS team has proposed a possible solution, which requires very little from you as a developer but saves you 28 MB!
The AWS team has created a callback that you can add to your composer.json and that removes all the services you don't plan on using: Removing Unused Services.
These are all the changes you'll need to make to your composer.json, and they'll instantly make your Docker image smaller without breaking any of the great functionality of league/flysystem:
{
    "require": {
        "league/flysystem-aws-s3-v3": "^1.0"
    },
    "scripts": {
        "pre-autoload-dump": [
            "Aws\\Script\\Composer\\Composer::removeUnusedServices"
        ]
    },
    "extra": {
        "aws/aws-sdk-php": [
            "S3"
        ]
    }
}
The pre-autoload-dump script hooks into the Composer process; in this case, it reads the composer.json file, determines which services you'd like to preserve, and removes all other services from the SDK. There are a few services that will always be included, whether you use them or not:
The AWS team has marked these namespaces as "unsafe to delete", so they'll always be included in the dependencies.
The configuration in the extra key contains a list of the namespaces you'd like to preserve.
As I'm only using the S3 namespace, this is the only service I'm listing.
Technically, I don't need to do this, as this namespace will never be removed by this Composer hook, but I like to be explicit about my dependencies.
So if you're trying to make your Docker image smaller and you're using Flysystem with the S3 adapter, be sure to implement this in your project! It'll save you many precious megabytes!
Are you looking to debug your PHP code more efficiently? You can do this very easily with Xdebug 3 and PHPStorm. With this setup, you will be able to step through your code line by line, allowing for better visibility into how your program is executing and helping you identify any issues quickly. In this guide, I will show you how to enable step debugging in PHP with Xdebug 3 and PHPStorm.
These are the steps we're going to take to enable step debugging in PHPStorm:
It's surprisingly simple to enable step debugging for PHP, so let's get started!
The first step to enabling step debugging for PHP is to configure Xdebug 3. In this guide, I assume you've already installed PHP and Xdebug 3, so if you haven't, do so before continuing this process.
First, we'll need to find out which configuration file we need to change to enable step debugging for Xdebug 3. An easy way to do this is by running an example PHP application, in my case Laravel, and placing phpinfo(); in the main script (public/index.php for Laravel). This shows a list of all loaded configuration files in the "Additional .ini files parsed" section. Here you'll find a listing for Xdebug as well.
Usually, this is loaded from /etc/php/8.1/cli/conf.d/20-xdebug.ini. Of course, the path depends on your PHP version and operating system. In my case, I'm running PHP 8.1 on Ubuntu, so yours might differ.
Find the path to your Xdebug configuration, open it in a text editor (such as gedit, nano, or vim), and change the contents of this file from:
zend_extension=xdebug.so
to
zend_extension=xdebug.so
xdebug.start_with_request=yes
xdebug.client_port=9003
xdebug.client_host=127.0.0.1
xdebug.mode=debug
xdebug.idekey=PHPSTORM
This configuration tells Xdebug to send information to port 9003 on your local machine. Now we'll configure PHPStorm to start an Xdebug client on port 9003.
Now that we've configured Xdebug 3, we'll need to configure PHPStorm to receive debugging information from Xdebug by setting up an Xdebug client inside PHPStorm. We can configure this by going to the PHPStorm settings (Ctrl + Alt + S). In the search bar, search for "debug". You'll see the following screen:
In the Xdebug section, make sure that the Debug port is set to 9003 and the "Can accept external connections" option is checked. Press "Apply" and "OK". Now you're ready to test your step debugging!
You've configured Xdebug 3 and PHPStorm, so now it's time to test your new debugging abilities! Run your PHP development server (php artisan serve for Laravel), place a breakpoint in your code, and press "Start Listening for PHP Debug Connections":
Once you've started listening, you can go to your application in the browser and you should get a notification in PHPStorm. You will need to select the project to assign the notification to, but after that, you'll get nice breakpoints in PHPStorm:
You can now place breakpoints anywhere and start debugging your PHP applications like a programming ninja!
With the help of Xdebug 3 and PHPStorm, you can now debug your code like a pro. You'll be able to quickly identify any issues that may arise in your applications with ease. With a few simple steps, I've shown you how to configure step debugging for PHP so that you can save time and energy when developing complex projects. As long as you follow these instructions, it should only take minutes before you start seeing great results from using this powerful toolset!
Do you want a simple way to keep up with all the latest news and updates from your favorite content websites? Well, RSS feeds are here to help! An RSS feed (or Really Simple Syndication) is like an online newspaper that collects stories from multiple sources. It's a great way for readers to quickly find out what's new without having to visit each individual website. And even though it may seem outdated in 2022, adding an RSS feed to your content website can be hugely beneficial in 2023 and beyond. In this blog post, I'll explain why you should still add an RSS feed to your website in this day and age.
These are the topics we're going to look at:
Let's get started!
Have you ever heard of an RSS feed? It stands for "Really Simple Syndication", and it's a great way to get the latest news and updates from multiple sources without having to go to each website individually. An RSS feed is an XML-based file (called a "feed") that contains all of your website's recent content. People can subscribe to the feed, so their favorite news reader will automatically update whenever new content is published.
Back in 1999, RSS was developed by Netscape Communications Corporation as an alternative to other web syndication formats. This version, then called RSS 0.91, quickly became popular with content publishers.
A few years later, software developers further improved the format into what we now call RSS 2.0. This version gave publishers more control over how information was displayed in news readers, and it became even more popular because of its flexibility and ease of use compared to other web syndication formats.
Nowadays, people use RSS feeds for blogs, news sites, podcasts, and more - helping us all stay up-to-date on the latest content from our favorite sources!
Adding an RSS feed to your website is quite easy. If you're using a CMS to maintain your website, chances are you'll already have an RSS feed on your website. You can check this by going to yourwebsite.com/rss.xml. If you see an XML document, you've already got an RSS feed on your website and you can skip to the next section.
If you don't already have an RSS feed, creating one is quite simple. Let's look at a basic example:
<?xml version="1.0" encoding="utf-8"?>
<rss xmlns:dc="http://purl.org/dc/elements/1.1/" version="2.0">
    <channel>
        <description>My website name</description>
        <title>My website name</title>
        <link>https://mywebsite.com</link>
        <pubDate>Fri, 23 Dec 2022 12:00:00 GMT</pubDate>
        <item>
            <description>This is a description of my blog post.</description>
            <title>The title of my blog post</title>
            <link>https://mywebsite.com/blog/my-great-post/</link>
            <guid>https://mywebsite.com/blog/my-great-post/</guid>
            <pubDate>Fri, 23 Dec 2022 12:00:00 GMT</pubDate>
            <media:content xmlns:media="http://search.yahoo.com/mrss/" url="https://mywebsite.com/image.jpg" medium="image" type="image/jpeg" width="640" height="426"/>
        </item>
    </channel>
</rss>
Each RSS feed starts with an XML version declaration, has an RSS wrapper, and contains channel and item information.
The channel contains information about your website: title, description, URL (link), and the publish date of this feed (pubDate). The items contain similar items: title, description, URL (link), pubDate, and a media object.
There are many more options, but these are the basics of an RSS feed for a blog.
You can find the full RSS 2.0 spec at W3C.
Adding an RSS feed to your website can be hugely beneficial in 2023 and beyond. Here are some of the greatest benefits:
Increased visibility: By providing an RSS feed, you make it easier for users to find new content on your website without having to visit every page manually. This makes it easier for them to stay updated with your latest posts and updates.
Improved discoverability: RSS feeds enable content to be syndicated across different platforms, which can help increase your visibility in search results and boost organic traffic to your website.
Higher engagement rates: By providing an RSS feed, you make it easier for users to stay engaged with your content. As a result, you can increase the amount of time users spend on your website, which can lead to higher conversion rates and a better overall user experience.
Easier content promotion: RSS feeds make it easier for you to promote your content through social media sites like Twitter, Facebook, and LinkedIn. It also allows you to build automation on new items in your RSS feed. For example, this post is automatically published to LinkedIn and Twitter, because the RSS feed has been updated with this blog post.
In short, adding an RSS feed to your website in 2023 can help you increase your visibility, engagement, and overall user experience. So don't wait any longer - start setting up your RSS feed today!
RSS used to be the best way to get your content out into the world. Social media took that advantage away, because consuming content there was much easier than setting up an RSS reader. However, with more incredible (no-code) tools out there than ever, the RSS feed is making a comeback!
RSS feeds can be used for more than just content promotion. They can be used to automate content publishing across different platforms, helping you get the most out of your content and reach a wider audience. You can automatically cross-post your blog posts to many platforms at once, you can synchronize your e-commerce offerings to Google shopping, your Facebook product catalog, and Instagram shopping. The best part is that you don't need to set up API access for this automation! All you need is your trusty RSS feed.
All in all, there's no doubt that adding an RSS feed to your website in 2023 will bring many benefits and help you build a successful online presence.
Like any technology, RSS feeds come with certain drawbacks. The biggest one is that most users nowadays don't use RSS readers. So, you'll have to rely on other methods (such as email newsletters) to notify your followers of new content and updates. Additionally, if you're running a blog or an e-commerce site, you need to make sure your RSS feed is up to date with the latest content. If it's outdated, it can be a turn-off for visitors and they may not engage with your website.
Overall, an RSS feed can be hugely beneficial if you use it carefully and keep it updated. It's best to use this technology in combination with other methods such as email newsletters, social media posts, and blogs to get the most out of your content.
Adding an RSS feed to your website in 2023 can provide numerous benefits such as increasing visibility, discoverability, and engagement. However, you should make sure that your RSS feed is always up to date to get the most out of it. Additionally, you should use other methods such as email newsletters and social media posts in combination with your RSS feed to achieve the best results.
Overall, if used correctly, an RSS feed can be a great tool to help grow your website's online presence and reach more people with your content. So don't wait any longer - start setting up your RSS feed today!
Do you dread setting up a webserver for your projects, and do you wish you could skip this step altogether while everything still magically works? Then Caddy might be the perfect webserver for your next project!
In this post, we're going to look at a few of the benefits of Caddy that instantly sold it as the replacement for Nginx and Apache. These are the topics we're going to look at:
Let's dive right in and see what Caddy is all about!
Caddy is a webserver, like Nginx and Apache, that routes traffic from the internet to your application, serves static files, or acts as a reverse proxy.
But it's so much more than just a webserver: it also automatically generates SSL certificates for your website, takes care of caching, and offers configuration that is ridiculously simple compared to Nginx and Apache.
Caddy takes care of the things that you dread doing in projects with Nginx and Apache. Sure, it's not difficult to set up SSL for those webservers, but it's still something you have to think about. Caddy does all this work for you, so all you have to do is point it to your application and it just works.
The Caddy team has put together an excellent illustration of what Caddy is in technical terms:
So how does Caddy compare to other webservers like Nginx and Apache? I think the most important thing to look at for most developers is performance. How fast is it compared to the other webservers? There are a lot of benchmarks on the internet that compare Caddy with Nginx, for example 35 Million Hot Dogs: Benchmarking Caddy vs. Nginx. This benchmark shows something very interesting:
Reading that benchmark is worth your time if you're interested in learning more about the performance comparison.
I've unfortunately not been able to find a good benchmark that includes Apache as well. If I do happen to find one, I will update this and post a link to the comparison.
So, looking at the performance of Caddy compared to other available webservers, we can see that it's not just comparable to Nginx; it sometimes even outperforms it. The biggest takeaway from that benchmark is how fast Caddy is with a "stock configuration". You don't have to spend time optimizing your configuration to get excellent speeds like you would with Nginx. This is what I want! I want the webserver to just work and get out of my way. Caddy accomplishes this for me!
I've already mentioned a few reasons why you should use Caddy for your next software project, but let's list them again:
Let's add a few excellent benefits to using Caddy for your next project!
First of all, Caddy doesn't have any dependencies. It's a single compiled Go binary that can run anywhere. This makes it ideal for running in a container in your Docker or Kubernetes environment.
Secondly, it offers things like load-balancing traffic to backend services, health checks, circuit breaking, and caching out-of-the-box. For me, this is a clear benefit, because this saves a lot of time...time you can spend on building your application.
Lastly, the biggest benefit of using Caddy is its built-in FastCGI directive. It lets you route traffic to your PHP application with a single line in your configuration file.
But how do you get started with Caddy? Let's find out!
Getting started with Caddy is simple! You can use the CLI to serve your configuration and start a server right then and there, or you can use Docker. I prefer Docker because it fits perfectly into my projects.
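If you also want to run Caddy in Docker, a minimal setup could look something like this (a sketch; the image tag, port, and volume paths are assumptions for illustration):

```yaml
# docker-compose.yml (illustrative)
services:
  caddy:
    image: caddy:2-alpine
    ports:
      - "8000:8000"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - ./public:/var/www/html/public
```

Caddy reads /etc/caddy/Caddyfile by default, so mounting your configuration there is all it takes.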
Let's look at the configuration to serve traffic to our PHP/Laravel application:
(laravel) {
	root * /var/www/html/public
	encode zstd gzip
	file_server
}

(redirect_clean_url) {
	handle_path /index.php* {
		redir {uri} permanent
	}
}

:8000 {
	import laravel
	import redirect_clean_url

	handle {
		try_files {path} {path}/ /index.php?{query}
		php_fastcgi my_php_fpm_backend:9000
	}
}
The code blocks starting with (laravel) and (redirect_clean_url) are snippets: essentially reusable blocks of configuration that you can apply to any site block by importing them ("import laravel" and "import redirect_clean_url"). If you've hosted a Laravel application before, you'll know that you have to route all traffic to the index.php file in the public folder of your project. The "laravel" snippet sets the document root to that public folder, enables gzip and zstd encoding, and tells Caddy to serve static files from the public folder (images, CSS files, etc.).
The redirect_clean_url snippet is an important snippet to improve the SEO of your application. Routing all traffic to your index.php file is great, but it also causes a problem: both https://example.com and https://example.com/index.php are valid paths. This causes ugly URLs (https://example.com/index.php/blog for example) and Google could mark this as duplicate content. To prevent all of this, we want to redirect all traffic from /index.php to a clean URL without index.php in it. This snippet takes care of redirecting /index.php/blog to /blog. It does this with a 301: take note of the "permanent" keyword.
The ":8000" code block is our actual website, in this case, localhost:8000. You can also replace this with example.com and it'll automatically generate SSL certificates for your domain and serve it through HTTPS. In the handle code block, we're telling Caddy to look for static files first and if it can't find those, redirect the traffic to the index.php in the public folder. When the traffic is routed to PHP, you can specify the FPM backend using the "php_fastcgi" directive. In my case, this routes the traffic to a PHP-FPM container in the same docker environment, but this could also be http://127.0.0.1:9000 or any other place you're running your FPM server.
Now you've got a fully functional webserver for your PHP application including an SSL certificate. That's really it. You're done. It's that easy.
The simplicity of this configuration file, the excellent performance of the web server, and the fantastic documentation make Caddy my preferred webserver.
Caddy is an excellent web server for several reasons: it's comparable to Nginx in terms of speed, its configuration is much simpler, and it doesn't require any thought about SSL certificates. It takes all the tedious work away when setting up a webserver and helps you to get back to developing your application or website! If you're looking for a new web server for your next project, look no further than Caddy.
Have you been optimizing your PHP application but hit a wall and can't seem to get any more performance?
I've got a little (known) trick you can try! This simple addition increased the performance of one of my PHP applications from 16.5 req/sec to 83 req/sec and even works in a Docker container!
What is this trick? OPcache!
OPcache is a PHP extension that compiles your PHP scripts to bytecode and caches the result, so the cached version of your application is used on subsequent requests. This is a huge speed improvement: normally, PHP compiles your scripts on every single request before executing them. With OPcache, we can skip the compilation part of the request cycle.
I run all of my PHP applications in Docker, so I will focus on how to make this work for a Docker image. You can replicate most of the steps for your own system, so if you're not using Docker, you can still follow along.
This is the basic configuration we're using for our OPcache extension.
This config file should replace the default opcache.ini at /usr/local/etc/php/conf.d/opcache.ini.
[opcache]
opcache.enable=1
opcache.revalidate_freq=0
opcache.validate_timestamps=0
opcache.max_accelerated_files=10000
opcache.memory_consumption=192
opcache.max_wasted_percentage=10
opcache.interned_strings_buffer=16
opcache.fast_shutdown=1
This configuration will cache the compiled PHP scripts, even if you've made changes to them. For these changes to show up, you'll have to restart your PHP-FPM daemon. However, if you're running your PHP application in Docker, you won't have to worry about this. You should create a new Docker image when you make changes anyway, so you always start fresh in production after you've made your changes.
If you want to use OPcache in your development environment, you'll want to see any changes you make right away. You can either disable OPcache with opcache.enable=0 or let the cache invalidate itself by setting opcache.validate_timestamps=1.
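As a sketch, a development opcache.ini based on the production configuration above could look like this (illustrative; revalidate_freq=0 makes PHP check file timestamps on every request):

```ini
[opcache]
opcache.enable=1
; Check file timestamps on every request so changes show up immediately
opcache.validate_timestamps=1
opcache.revalidate_freq=0
```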
Let's install OPcache on a php-fpm:alpine image:
FROM php:8.1-fpm-alpine

# Install build dependencies and the OPcache extension
RUN apk add --no-cache $PHPIZE_DEPS \
    && docker-php-ext-install opcache \
    && apk del $PHPIZE_DEPS

# Copy the opcache.ini into your Docker image
COPY docker/php/opcache.ini /usr/local/etc/php/conf.d/opcache.ini

# Run your application
CMD php-fpm
This is a very minimal Dockerfile and might not even build as-is, but it does show the steps you'll need to take to enable OPcache in your application.
With this configuration, I've sped up my PHP application by 5x, increasing the requests per second from 16.5 to 83. I'm very satisfied with the results, so I hope it works for you as well.
Mailchimp is a great SaaS platform for sending e-mails to your subscribers and using automation to sell to your mailing list. However, once you have a sizable audience and want to customize your e-mail interactions, you'll be hit with a hefty price tag. Mailchimp is very expensive! If you've got a mailing list of 2,000 contacts and you're on the cheapest paid plan, you're already paying $34 per month. This is a great option for a marketing team with no access to a software developer, but I am one of those software developers.
Before we get started with the migration, let's introduce the 2 heroes of the story: Postmark and Temporal. If you haven't heard of these two before, here's a quick introduction. If you have, you can skip to the sections that you want to read:
You might have heard of Postmark before: it's a well-known e-mail platform that allows you to send transactional and marketing e-mails, and even handle incoming e-mails. In my experience, its delivery rate is better than a few other options out there. One of its biggest benefits is that you can create templates in code and synchronize them with Postmark. This is crucial for this entire process, because it allows me to send e-mails using a REST API.
The second piece of software that makes all of this possible is Temporal. Temporal is software that keeps track of the state of your code. In simple terms, it executes each line of code only once and can resume code execution after system crashes. This makes your code incredibly stable. If that hasn't convinced you of its power, perhaps the following example will clarify it.
Example
If you have a banking application and your customer is sending a payment to another customer and your program crashes halfway through: what happened to the payment? What will happen when you restart the application? Will this automatic process charge your customer twice? Temporal knows where the application was in its code execution and resumes from that exact line. Temporal will help you to never charge your customer twice, not even after a crash.
Another great benefit of this state machine is that you can let processes take weeks or months if you want to. In normal code, you can put some type of delay (sleep, time.Sleep, etc.) in your code, but you wouldn't think of doing this for 14 days. Usually, you'd only delay your code by about 10 seconds or so.
But why is this important? Because it drastically simplifies your code. Charging a customer for a subscription is now just a for-loop with a delay of 30 days after the charge. 30 days later, check if the customer is still subscribed, charge them, and delay for another 30 days. It's really that simple. Keep this in the back of your head, because it will become very important in the scheduling of e-mails to your contacts.
One last thing to highlight before we can get started! You can schedule code execution based on a cron schedule. This will become the heart of the newsletter application. Enough talk, let's get to some code!
If you've ever sent e-mails from your code, you might have worked with local templates and sent ready-made HTML e-mails to your contacts. This is always a pain, and I can't even begin to count the number of times an e-mail just doesn't look good on a mobile device or looks a little off in Gmail on Safari. Creating e-mails in HTML takes you back to the "Good ol' HTML 4 days" where everything is a table and nothing is responsive. Let's not even attempt this and let Postmark handle it for us.
If you use the mailmason starter I've also used as a base, you'll see minimal HTML tables and mostly just simple HTML content. You can style your emails using simple CSS files and Mailmason will even generate the plain text versions of your emails.
After you've synchronized your templates with your Postmark server, you'll have templates with easy-to-use template aliases. You can use these template aliases, alongside the variables you'll need for your templates (first name, etc.) and send them as JSON to Postmark to send your email. Sending emails with a simple REST API call is a fantastic feeling.
For the sake of this post, let's imagine we're using a template like this:
<h1>Hello!</h1>
<p>
I hope you've had a great week! This is what I've written for you this week, I hope you enjoy!
</p>
<img src="{{image_url}}" >
<br />
<h1>{{title}}</h1>
<p>{{description}}</p>
<a href="{{url}}" class="cta-button">Read "{{title}}"</a>
<p>
I'm looking forward to sending you the next email soon, take care!
</p>
As you can see, we've got a few variables we can fill in using the REST API: image_url, title, description, and url.
Let's give this template the alias "weekly-newsletter".
This is a very basic email template with minimal styling that'll be stored on the Postmark servers, rather than in your own application.
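Sending an email with this template then comes down to a single POST request to Postmark's /email/withTemplate endpoint, with a JSON body along these lines (the addresses and values are placeholders):

```json
{
  "From": "you@example.com",
  "To": "hello@example.com",
  "TemplateAlias": "weekly-newsletter",
  "MessageStream": "newsletter",
  "TemplateModel": {
    "image_url": "https://example.com/images/cover.png",
    "title": "My latest post",
    "description": "A short summary of the post",
    "url": "https://example.com/blog/my-latest-post"
  }
}
```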
If you're interested in learning more about mailmason and my automatic process for synchronizing templates with Postmark, please reach out to me.
Let's move onto the heart of automation: Temporal.
Temporal, the state machine, has a concept called workflows. A workflow is a collection of individual steps that perform a (more complicated) task. A workflow should be deterministic, which means it has the same result every time it executes with the same input data.
Any nondeterministic behavior should be moved to an activity, because the result of an activity is recorded in the workflow history. You can think of an activity as a step in your workflow. Some examples of code that belongs in an activity are sending an e-mail or fetching something from an API.
When we break this down for sending a newsletter, you can see a workflow as these steps:
During this workflow, I want to get through the code as quickly as possible and check again next week. "Checking again next week" highlights one of the features Temporal offers for workflows: cron workflows. These can be executed on whatever schedule you want; in my case, every Saturday at 15:00 (3 pm). That cron schedule looks like this: "0 15 * * 6".
Starting (scheduling) a workflow in my code, which is written in Go, looks like this:
// Create an easy-to-use workflow ID
const WorkflowID = "newsletter-%s"

// Register the workflow with a string name
w.RegisterWorkflowWithOptions(EmailNewsletterWorkflow, workflow.RegisterOptions{
	Name: "email.newsletter",
})

type SubscribeRequest struct {
	Email string `json:"email"`
}

// Execute the workflow with a cron schedule
if _, err := s.client.ExecuteWorkflow(ctx, client.StartWorkflowOptions{
	ID:           fmt.Sprintf(WorkflowID, "hello@example.com"),
	TaskQueue:    Queue,
	CronSchedule: "0 15 * * 6",
}, "email.newsletter", SubscribeRequest{
	Email: "hello@example.com",
}); err != nil {
	return err
}
There is a lot of code that I'm omitting in this example, like setting up a Temporal worker and configuring the Temporal client. I'm also hardcoding the e-mail address for simplicity's sake. The input data for the workflow, SubscribeRequest, has JSON tags because Temporal stores this input data in its database; by specifying the keys it should use as JSON, you avoid some rare encoding and decoding issues.
I'm highlighting the easy-to-use Workflow ID, because this will make it easy for us to stop the workflow in case the contact is unsubscribing from the mailing list. By specifying this workflow ID, you also prevent the workflow from running multiple times, in case someone accidentally (or on purpose) signs up for your mailing list multiple times. If you don't specify the workflow ID, it'll be assigned a random UUID, which makes it very difficult to cancel the workflow without saving this random workflow ID in another database.
We've seen that we can execute a workflow based on a cron schedule, but the workflow doesn't do anything yet. Let's change that! In the workflow, we'll need to do 2 things: fetch the latest post, and send it to the contact by e-mail.
This workflow could look something like this:
type Post struct {
	Title         string `json:"title"`
	Image         string `json:"image"`
	Description   string `json:"description"`
	URL           string `json:"url"`
	PostedDaysAgo int    `json:"posted_days_ago"`
}

w.RegisterActivityWithOptions(FetchLatestPost, activity.RegisterOptions{
	Name: "post.fetch-latest",
})

w.RegisterActivityWithOptions(SendPostmarkTemplate, activity.RegisterOptions{
	Name: "email.send",
})

func EmailNewsletterWorkflow(ctx workflow.Context, config SubscribeRequest) error {
	// We want to retry both of these activities a maximum of 10 times.
	ctx = workflow.WithActivityOptions(ctx, workflow.ActivityOptions{
		TaskQueue:           Queue,
		StartToCloseTimeout: time.Minute,
		RetryPolicy: &temporal.RetryPolicy{
			InitialInterval:    time.Second,
			BackoffCoefficient: 2.0,
			MaximumInterval:    time.Minute,
			MaximumAttempts:    10,
		},
	})

	var latestPost Post

	// Call an endpoint within an activity to fetch the latest post
	if err := workflow.
		ExecuteActivity(ctx, "post.fetch-latest").
		Get(ctx, &latestPost); err != nil {
		return err
	}

	// We want to avoid sending emails if there wasn't a new post within the last 7 days
	if latestPost.PostedDaysAgo > 7 {
		workflow.GetLogger(ctx).Info("latest post is posted more than 7 days ago, skipping...")
		return nil
	}

	if err := workflow.
		ExecuteActivity(ctx, "email.send", Config{
			TemplateAlias: "weekly-newsletter",
			Email:         config.Email,
			From:          "info@roelofjanelsinga.com",
			MessageStream: "newsletter",
			TemplateModel: map[string]interface{}{
				"title":       latestPost.Title,
				"image":       latestPost.Image,
				"description": latestPost.Description,
				"url":         latestPost.URL,
			},
		}).
		Get(ctx, nil); err != nil {
		return err
	}

	return nil
}
This workflow fetches the latest post and checks whether it was posted in the past 7 days. If there hasn't been a new post in the past 7 days, the workflow returns and schedules a new workflow for next week.
If there was a new post in the past 7 days, the workflow executes an activity called "email.send" with the data the activity needs (Config).
This activity calls the Postmark API and sends the template with alias "weekly-newsletter" to the contact we've given to the workflow. In this post, we hardcoded this email as "hello@example.com". The TemplateModel map in the configuration contains the variables you defined in your Postmark template. You could also use a struct rather than a map, but I'm reusing this "email.send" activity for every workflow that sends emails, so a map makes it easier to use across my application.
The email sending activity is quite straightforward and just converts the Config into an API call to Postmark:
type PostmarkResponse struct {
	To          string
	SubmittedAt time.Time
	MessageID   string
	ErrorCode   int
	Message     string
}

type PostmarkFailure struct {
	ErrorCode int
	Message   string
}

func SendPostmarkTemplate(_ context.Context, config Config) (*PostmarkResponse, error) {
	client := sling.New().Base("https://api.postmarkapp.com")

	var success PostmarkResponse
	var failure PostmarkFailure

	_, err := client.
		Post("/email/withTemplate").
		Add("Accept", "application/json").
		Add("X-Postmark-Server-Token", "your-token-here").
		BodyJSON(Body{
			From:          config.From,
			To:            config.Email,
			TemplateID:    config.TemplateID,
			TemplateAlias: config.TemplateAlias,
			TemplateModel: config.TemplateModel,
			MessageStream: config.MessageStream,
		}).
		Receive(&success, &failure)
	if err != nil {
		return &PostmarkResponse{
			ErrorCode: failure.ErrorCode,
			Message:   failure.Message,
		}, err
	}

	return &success, nil
}
We can now subscribe our contacts to our mailing list and send them a weekly email! Unfortunately, not every contact will stay subscribed indefinitely, so we'll need to handle unsubscribes. Let's see how we can do that!
One of your contacts wants to unsubscribe from your mailing list, that's too bad! However, it's not difficult to implement this in our application, because we've thought about this when we started our initial workflow!
Remember the workflow ID? That very nice and easy to use workflow ID? That's going to make this unsubscribe process much...much easier!
When we executed the workflow for this contact, we've customized the workflow ID to contain the contact's email: newsletter-hello@example.com. Now, all we need to unsubscribe a contact from our newsletter is their email.
Since we're using a workflow with a cron schedule, we can't just cancel the workflow, because this causes a new workflow to be scheduled. We want to stop the workflows from being scheduled after the contact unsubscribes, so we'll need to terminate the workflow.
This is what it looks like:
type UnsubscribeRequest struct {
	Email string `json:"email"`
}

func (s service) Unsubscribe(ctx context.Context, payload UnsubscribeRequest) error {
	err := s.client.TerminateWorkflow(ctx, fmt.Sprintf(WorkflowID, payload.Email), "", "unsubscribed")
	if err != nil {
		s.logger.Error("Error terminating workflow", err)
		return err
	}

	return nil
}
The empty string that we pass to the TerminateWorkflow method represents the RunID. By leaving this RunID empty, Temporal assumes you want to terminate the latest run for this workflow.
In the code, you can also see "unsubscribed": this is the reason for termination. It's optional, but it's nice to have if you're working in a team and wondering why a workflow was terminated.
In this post, I've described how I've migrated my newsletters from Mailchimp to Postmark + Temporal. Now, with Postmark and Temporal, I've got complete control over what I send to my mailing list, to whom, and how. The upside is that I can still do all the things I could with Mailchimp, for a fraction of the cost. The downside of building all of this yourself is that you'll need technical expertise and you'll have to solve any issues that arise by yourself.
I can live with those downsides, because working with both Postmark and Temporal is a delight! If something is unclear, Temporal has a great community to help you out.
If you have any questions about this process, don't hesitate to reach out!
Copywriting is an art AND a marketing strategy, but it's often overlooked as an opportunity to sell a story, sell a product, and sell a service. Copywriting is essential for SEO and to explain what it is you're selling and how it fits into the world. So how do you do it effectively? Let's find out!
In this post, I'll explain 2 things: why copywriting is essential for SEO, and how it can sell your products for you. After reading this post, you'll have clear steps that you can take to start writing new copy or improve your current texts.
Let's get started with why copywriting is essential for SEO and why you shouldn't take it lightly.
Copywriting is essential for SEO, because it makes your website searchable and helps to clarify why your product or service is needed. Without supporting content, you leave your potential customers with more questions than answers. Why would your potential customers pay for what you sell if they don't understand what it is you're selling? Good copywriting explains what it is you do or sell, which gives you a boost in Google for the relevant search terms. You want to be found for search terms that are relevant for your business, otherwise you're not attracting the visitors that will become customers.
Search engines these days are very clever in creating context for your website, figuring out how it fits into the internet. If your copywriting helps to explain your products and services, giving context to your business, you're "helping" the search engines figure you out. This gives you the opportunity to influence the search engines by providing your own context of who you are. But what does this mean? Let's figure that out in the next section about how you can use copywriting to actually sell your product or service.
In the previous section we've seen that copywriting is essential for SEO. But now you might ask yourself: how do you write copy that sells? The key is to be crystal clear and remove any nuance, regional slang, and overall vagueness from your copy.
You can improve your SEO and sales when you're crystal clear with your copywriting about what it is you do. There is no place for nuances and vague copy in SEO: Be crystal clear about what you do or sell. Example: "We'll make your car shine again" becomes "We'll clean your car for $5". This sets the expectations upfront and is also an SEO-friendly title.
Crystal clear copy is the first step, but it's not selling anything if you're not telling a story. Telling a story with your straightforward copy might sound like a waste of time at first, but it's the missing ingredient that will sell your product or service. Storytelling takes your potential customer on a journey along 4 steps:
If you skip from step 1 to 4, it feels like a cold call: "Hey we have product X, buy it now for $99". You're most likely going to say: "No thanks, I'm not interested". Even if product X could've solved all of your problems, the seller never told you exactly which problems it solves and why it will be a great fit for your situation. There is no relationship between the seller and the buyer. The seller doesn't know the buyer's situation and is guessing you'll want it. The buyer doesn't trust the seller's recommendation, because this entire situation is based on a huge gamble.
A website is one-way traffic: The buyer interacts with your website, but you (usually) can't directly interact with the buyer. This looks a little like the cold call from above, but you can use copywriting to create trust between you and your potential customer in this one-way interaction. How do you build trust in a one-way relationship? Let's find out!
In a one-way relationship with your customer on your website, you can still build trust between you and your visitors. You should put all of your cards on the table, by answering all of these questions in your copywriting:
A great rule of thumb for answering these questions is this: The more specific the better. The order of questions is deliberate: You want the potential customers to feel seen before you ask them to buy from you. But remember: You HAVE to ask them to buy from you! You have to give them the option to buy or reject your offer.
In this post, we've seen why copywriting is essential for SEO and selling your product or service. Copywriting helps you sell your product when it's crystal clear, tells a story, and builds trust with your potential customers. I've outlined how you can improve your copywriting by providing you with specific questions you'll need to answer. Good copywriting can take your website from struggling to bringing you sales, but you have to prioritize it.
If you've felt that your business or website isn't getting the search traffic you think it deserves, you're not alone. Many business owners have this feeling and hire SEO specialists to help them achieve their goals.
However, even with the greatest SEO specialist, it's still a great idea to understand how SEO works and why what you want might not be what's best for your business. Let's explore the benefits of playing the SEO long game and why it might be a much more effective marketing strategy for your business.
Let's skip the SEO hacks, because they're often very expensive and don't benefit you or your business in the long run. Play the long game!
The SEO long game is simple: You're investing heavily (time, effort, and capital) into making your website as helpful as possible. In practice, this means you're going to have to do 3 things well:
This SEO strategy isn't built on quick SEO wins, but it will reward you with exponentially growing traffic as time goes on. I know how vague and frustrating this sounds: there is simply no specific timeline for this strategy. That's because Google & friends change their algorithms daily, which can benefit you sooner or make you wait longer.
This SEO strategy is a bit like investing in the stock market: you look at the long-term picture. Google can reward or punish you from day to day, but if you keep up this strategy, you will see massive long-term benefits.
Enough concepts, let's break down the 3 things that you have to do well to play the SEO long game.
How will people find your business or service if you never explicitly tell your audience what you do? The harsh answer is: they won't find you.
For example, if your business cleans cars and all your website says is "We make your car look like it's brand new", people won't find your business. Your audience will search for terms like "car cleaner near me" or "clean car quickly". A better title/headline would be: We clean your car to make it shine in 15 minutes. Be crystal clear about what it is you do, put it into plain words.
Your website sells your products and/or services, so you should make it very obvious what it is you're selling. You can use all the catchy words you'd like to describe your products or services, but these aren't words your audience will look for. Your audience won't search for "make my car sparkly clean", so think like your audience and use words they would use.
If you can't answer the simple question "How do I help my customer?", you're not ready to optimize your SEO yet. If you don't know how you help your customer, how will they know?
Before you move on to the next step in the SEO long game, you need to have a very clear answer to this question. Your answer should use plain words only and should be a maximum of 2 sentences.
A great way to test your sentences is to tell someone those 2 sentences and then ask them what you do. If they can explain back to you what you do, you're ready to move on.
Do you know what Google loves to do? Google loves to help people by answering questions and bringing them to sources that help to answer those questions. Ideally, Google wants to answer these questions as efficiently as possible.
Google & Friends love to see user flows where the user goes from a vague idea to a very specific question. When someone goes through this journey, they're learning something new along the way to help them ask better questions. The more specific the question, the better you can help.
This is what Google loves to see:
This flow of actions means that your website taught the visitor something and helped them to better understand what they really need. Google will now mark this page as useful for the next person to come by.
However, Google also knows when you're not successful in answering this visitor's question. When you've failed to be helpful, your visitor will try to learn more on another website.
This is what a non-successful flow looks like:
Apparently the visitor didn't find what they were looking for on your website, so they try again on another website. Google now knows that the page wasn't helpful for this search term and will give it less priority the next time someone searches for it.
Now that we know how Google finds good and bad content and how it ranks these pages, you'll start to see a feedback loop take shape. If you're successful in helping your visitors and Google sees that you're adding value to their search experience, they'll reward you.
If you're consistently helping their algorithm by answering questions, you'll get in a positive feedback loop. This positive feedback loop looks like this:
You want your website to get into this positive feedback loop, because it's almost passive marketing. Once the content is out there, you won't have to look at it too much again. You will have to update it if it becomes outdated, otherwise this feedback loop will start to work against you. If your content is outdated, you're no longer helpful and slowly lose the traffic you worked so hard for.
You'll end up in a negative feedback loop. This is what a negative feedback loop looks like:
This negative feedback loop is the best argument against using SEO hacks to help your pages rank. Algorithms change, but great and helpful content performs well and survives algorithm changes more easily.
Everyone knows Google likes fast websites with a great user experience (UX). That's why business owners often put most of their effort into this part of SEO. You can have the fastest website in the world, but if you're not adding any value to your visitors' search experience, you won't grow as fast as you could. Google won't have a good reason to rank your pages, so your audience still won't find you.
Slow websites aren't great for search rankings, but speed isn't the only factor that goes into great SEO. That said, you should still do everything you can to get a solid base: a fast website. A fast website is always a great thing, but it alone won't sell your products or services.
When you've got the fast website, you can focus on the things that will actually sell your products or services: crystal clear words and showing your expertise. These 2 parts help people find you and help convince them you're the best at what you do and they absolutely need to hire you.
Do you want to build your own SEO friendly website to grow your business and get the results you want? That's fantastic, you won't be disappointed!
If you're unsure where to start or just want someone to help you achieve your business goals, I'm for hire! You'll get to focus on what you do best: running your business and making sales. I will help you implement the positive feedback loop I've described in this article and lift your website to the next level.
In November 2020, I started working with MQTT to set up a few smart devices in Home Assistant. I described how to create a simple MQTT switch in Home Assistant. That process works really well, but it requires manual work. If there is anything I don't want to do several times, I will find a way to automate the process. Luckily, Home Assistant has an amazing feature that helps you automate this: MQTT discovery.
In this post, I'll go over a few things that I've done to use this feature to automate my MQTT devices. I will be sharing my Arduino program in case you're looking to use this for your own projects as well.
These are the topics I'm going to write about:
Let's get to it! This feature is one of my favorite recent discoveries, so I hope you enjoy it as well!
MQTT Discovery is essentially a way to tell your MQTT broker, Mosquitto running inside of Home Assistant in this case, which topics it needs to listen to. In the post that I've linked to in the introduction, I've explained how you can manually tell Mosquitto to listen to a certain topic. This works fine, but if you have more than 2 or 3 devices, this can get old really quickly.
When you use MQTT Discovery, the MQTT device, in this case an Arduino, sends a message to a discovery topic on the MQTT broker telling it exactly which topic it should listen to for messages. This way, your MQTT device announces itself to the broker, without you having to manually configure the broker. Practically, this means that your MQTT broker will know about any and all devices that have announced themselves, without you having to do anything.
Home Assistant has such a discovery topic built in. My main use for MQTT devices is plant sensors. In total, there are 3 sensors on the device right now. Each of these sensors has its own "state", which means they all need their own discovery topic as well. In my case, my discovery topics are:
If you were to register each of these sensors manually for each MQTT device, you'd spend more time configuring than actually enjoying your smart devices. By using MQTT Discovery, all I have to do is connect the Arduino to a power source and the MQTT broker instantly knows about the device and its various sensors. Pretty cool!
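To make the naming concrete, here is a small sketch of how these per-sensor discovery topics are built up. The device and sensor names are hypothetical examples (not my actual setup); `homeassistant/` is Home Assistant's default discovery prefix.

```python
def discovery_topic(component: str, device_id: str, sensor: str) -> str:
    # Home Assistant listens on: <prefix>/<component>/<node_id>/<object_id>/config
    return f"homeassistant/{component}/{device_id}/{sensor}/config"

# One config topic per sensor on the device:
device = "plant_sensor_1"
topics = [discovery_topic("sensor", device, s) for s in ("temperature", "moisture", "light")]
print(topics)
```

Each of those `config` topics receives one JSON message describing a single sensor, and from then on the broker knows where to listen for that sensor's state.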
So what does it actually look like to code MQTT Discovery on your Arduino or NodeMCU? Well, it's actually quite similar to configuring it manually in Home Assistant. Instead of configuring each sensor in Home Assistant, you configure it in your Arduino program and send that configuration to Home Assistant.
In this code sample I will skip over a few things. I won't discuss these specific things, as they will differ for you:
I'll discuss each of these topics in separate posts, as they're not relevant for this topic. I will, however, include them in the code sample for the full picture. Enough talk, let's see some code!
Sending the discovery message requires a few things:
I won't go into specifics here, but you can find the working code for this in the full code sample below.
I break my discovery messages into functions, so I can group them together without polluting the heart of the Arduino program. This is a discovery message for the temperature sensor of this particular device:
As you can see, this script specifies a value_template for this specific sensor. I'm sending the entire state of this MQTT device, which includes the values of 3 sensors, as a JSON object to Home Assistant. Home Assistant doesn't know what to do with that by itself, so we need to tell it how to get the value of the temperature sensor from this JSON object. "value_json" parses the incoming JSON string as a JSON object, so we can use dot notation to get nested values. I'm using a pipe to specify a default value (0 in this case) in case the sensor value is missing or something is wrong.
By sending this JSON object to the MQTT discovery topic, Home Assistant knows what to do with the messages it receives.
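As a sketch of what such a discovery config payload can look like, here's a minimal Python version. The keys follow Home Assistant's MQTT sensor schema, but the topic and entity names are made-up examples, not the ones from my actual device.

```python
import json

config = {
    "name": "Plant 1 Temperature",            # hypothetical entity name
    "state_topic": "home/plant_sensor_1/state",  # hypothetical state topic
    "unit_of_measurement": "C",
    # Extract one sensor's value from the JSON state, defaulting to 0:
    "value_template": "{{ value_json.temperature | default(0) }}",
}

payload = json.dumps(config)
print(payload)  # this string is what gets published to the discovery topic
```

The Arduino program builds the same kind of JSON string and publishes it once; after that, Home Assistant handles every state message on its own.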
For more context on how this discovery function fits in the whole program, I'm including my full script below.
As promised, here is the full script:
It might look a little strange, because the loop function is empty. This is because I'm using the "Deep sleep" mode of this NodeMCU to preserve energy. After this script has sent all sensor data to Home Assistant, the NodeMCU turns itself off and back on after 60 seconds. This way, I can use normal batteries to power this device and have them last much longer.
Again, this is my personal use for it and it might differ from your own use case.
MQTT discovery has made my experience with MQTT devices infinitely better. I don't have to worry about manually registering devices in Home Assistant any more, because the devices register themselves now. This leaves very little room for typos and other human error. It makes using a new device as simple as powering it on and leaving it alone. It's great to combine this with something like Grafana and InfluxDB, because you'll see the sensor values show up right after you plug in your device.
I hope this was helpful to you! I will be posting more about some of the topics I've skipped in this post, like Wi-Fi connections for the NodeMCU.
Improving your website's UX and SEO takes time and effort. You need to give your content some context to help your readers and the search engines understand what it's about. What if you could automate this and do more with the same amount of effort?
Three months ago, I wrote about using Neo4j, a graph database, for SEO and UX purposes. Neo4j might be complete overkill for the relatively small amount of data I put in it, but it has boosted my productivity and my internal link building strategy. Here's why!
With a graph database, you can make... well, graphs. By linking pieces of content together, I've created a large cobweb of content with dependencies, as you can see in this screenshot:
As you can see, there are many, many connections between pieces of content. Each of these connections means that a piece of content is related to another piece of content. You can see where I'm going with this, right? These connections help me link related pages, which creates a giant web of internal links. These internal links help search engines make sense of your content and give it context.
In a way, you're giving these search engines some context for your content. You don't need a database like Neo4j for this at all; you can do it manually as well. But a database makes it a whole lot easier. With a graph database in place, which focuses on relationships between nodes (Plant, Article, Page, etc.), you can quickly generate relevant internal links.
If you've written a lot of content for a longer period of time, you'll know this feeling: "What do I write about now?". With a graph database, this becomes much easier.
If you can't think of anything to write about, you can start connecting the circles in the screenshot from earlier. Say you have Page A and Page B: they're great pages, but they have nothing in common, nothing connecting them. You can now draw a line between Page A and Page B by writing a new page, Page C, which serves as the common ground.
Before, you couldn't link from Page A to Page B, because they were so different, but now you can link between them indirectly. You can link from Page A to Page C and then to Page B, and vice versa. You've created content context, which is great for your readers, but also for the search engines trying to make sense of your website.
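The "common ground" idea can be sketched in a few lines of Python. The page names and link structure here are invented for illustration; a real graph database does this over thousands of nodes, but the set intersection captures the core idea.

```python
# Outgoing internal links per page (hypothetical example data).
links = {
    "page-a": {"page-c"},
    "page-b": {"page-c"},
    "page-c": {"page-a", "page-b"},
}

def common_ground(a: str, b: str) -> set:
    # Pages that both A and B link to: candidates for the indirect route A -> C -> B.
    return links[a] & links[b]

print(common_ground("page-a", "page-b"))  # {'page-c'}
```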
Again, you can do all of these things by hand. But doing so takes a lot of time, and you will miss some connections that might be obvious to an automated system. You want great UX, you want great SEO, but you'll need great content for that. If you spend all your time manually creating connections between pieces of content, you'll never have time to actually write that content.
Using Neo4j for your content is probably massive overkill, but it will improve your workflow. At least it has for me. My normal writing process hasn't changed at all, but the impact on UX and SEO has been far greater. And all that with the same effort as before.
If you've ever talked to me about anything tech related, you'll know that I use Linux for both my personal and professional computing needs. Linux gives me the freedom I want to write software and use a computer on a daily basis.
Over the past 4 years, I've used the Ubuntu distro as my personal and professional OS. Now, after having enjoyed EndeavourOS, but wanting a little bit more ease-of-use, I've installed Manjaro Gnome edition on my PC.
I'm not ditching Ubuntu though; I'll still use it at work and I still really enjoy it. However, the latest versions of software that you get with a rolling distro like Manjaro are something I'm excited about! I like getting new software, and I like the fact that the Manjaro project went the extra mile to install all kinds of useful Gnome extensions. It's a minor thing, but it's a nice gesture. I can use the same Gnome that I'm used to from Ubuntu, but also get the latest software updates instantly: win-win!
I decided not to roll my own environment with Arch, because I just want to get stuff done. Arch is great if you want ultimate control over everything that goes into your system, but I have stuff to do.
I left Windows a long time ago, because I wanted to get stuff done. When you get a fresh installation of Windows, you have to go through 20 minutes of setting up your system. When you're in a rush, Windows decides it's time to install 99 updates and you can't use your own PC for a while. This is why I switched to Linux: productivity.
This same reason is why I went with the complete package over the building kit and picked Manjaro.
I've been using Manjaro quite extensively for the past 3 weeks and I've noticed a few things. Many things were great and some things were less great. Let's talk about those.
The greatest thing about Manjaro is the fact that you're getting stuff done within minutes after installing the OS. Then you have the amazingly up-to-date packages and the AUR. Everything I need, I can install with a few simple commands. This is where my experience with Ubuntu comes in handy, because, you guessed it: I'm also using snaps. It's all about getting the things I need to do my work as quickly as possible.
The included applications are great, especially the Pop shell extension and "Web Apps" where you can use websites as apps. All of these little things made it very easy to customize MY PC to what I want.
There were a few things that I had some trouble with. None of these points have anything to do with Manjaro itself, but with my lack of experience with it. First of all, I miss the utilities Ubuntu has for switching the default PHP version the system uses. This is not a flaw in Manjaro, but simply a great point for Ubuntu.
Installing and enabling PHP extensions was also a strange experience. When you install PHP extensions on Ubuntu, they're automatically enabled. This is unfortunately not the case for Manjaro, in my experience. After figuring that "quirk" out, it was easy enough to get everything working properly.
Again, these points are nothing against Manjaro itself, but have to do with my lack of experience with the OS.
Overall, I'm really enjoying my first Manjaro experience and I already feel confident using it every day. The advantages outweigh the challenges for me, so I will keep using it as my personal OS. You should give it a try as well and see if it works as well for you as it does for me.
Software developers are notorious for making life more difficult for themselves than necessary. I'm no exception to this, but it's getting better the more experience I get. KISS, or "Keep it simple, stupid", is a design principle, but it doesn't just apply to design and UI/UX. It also applies to software development.
Overcomplicated applications are a joy to build, but a nightmare to maintain and debug. Writing complicated applications using simple and clean code is a skill that you learn over the years. Easy-to-understand code is easier to maintain and debug. Simple code, despite the name, is not simple to write and takes experience.
Do yourself a huge favor and keep your applications as simple as possible. Make sure to document your thoughts when writing code. Ideally, you won't need to add comments to your code, because it's easy to understand. Your thought process and reasoning for writing that code, however, is something you should write down somewhere.
But only writing simple code is just 1 step. If you truly want to maintain your codebase for a longer period of time (years), make sure to keep your infrastructure as simple as possible as well. The fewer moving parts the better. If you can understand the whole stack quickly, you can write new features more quickly, fix bugs, and debug any problems.
Writing simple code is one of the best skills you can have as a software developer. Simple code is easier to extend, debug, and maintain. "Keep it simple, stupid" doesn't just apply to design, but also to software development. Knowing this principle will make you a better developer.
Internal link building is a great way to signal to search engines which pages are the most important on your website. However, you can also use internal links as a way to group content and give individual pieces of content more context. If you have a blog with 5 posts, this process isn't very difficult or time-consuming. However, when you have over 25 posts, this becomes increasingly difficult and you need to look for ways to link pieces of content together automatically.
In this post, I'm going to highlight the progress I've made with a project of mine: Plant care for Beginners. I'll go over the 3 iterations of internal link building I've implemented in the past 18 months and how I've tried to make increasingly more relevant links between pieces of content.
These 3 versions are:
Let's get into the 3 iterations of link building and why I chose to move from one to the other.
When I started linking guides together, I only did this to keep visitors on the website by giving them more content to read. Prior to this, each guide stood on its own and there weren't any suggestions for "Other content you might enjoy". To start out, I listed the 3 most recent guides under every other guide. This way, readers had somewhere to go after they finished reading a guide.
It was an improvement over the previous situation, but it didn't benefit the SEO rankings of the website as much as I expected it would. Why? Well, every time a new guide was published, every guide linked to the new guide, whether it was relevant to the current topic or not. This might give the new guide a little bit of "link juice", but more often than not, readers wouldn't click the link to the new guide, because it wasn't relevant to the content they were reading at that point in time. The new guide didn't offer them what they wanted or needed, so the "link juice" was wasted.
A logical next step was to manually suggest related content. The easiest way to add some sort of relevancy to any piece of content is by adding tags, a lot of them. By adding tags, I was able to create groups of related content. The more tags two guides have in common, the more relevant they likely are to each other.
I implemented a tag-based relevancy model and the results were quite good. Most linked guides had something in common with the guide the reader was currently on. However, purely matching based on tags and sorting by which guide had the most tags in common is still not completely accurate. More than a few times, I saw a suggested guide under the content that was only vaguely related, while I knew there were guides that were much more relevant to the current guide. There just wasn't a good way to fix this using only tags (without gaming the system and adding more tags).
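The tag-based model boils down to a set intersection and a sort. Here's a minimal sketch; the guide names and tags are invented examples, not the actual data from the site.

```python
# Hypothetical guides and their tags.
guides = {
    "watering-monstera": {"monstera", "watering", "houseplants"},
    "monstera-light": {"monstera", "light", "houseplants"},
    "cactus-care": {"cactus", "watering"},
}

def related(current: str, top: int = 3) -> list:
    # Score every other guide by the number of tags shared with the current one.
    current_tags = guides[current]
    scored = [
        (len(current_tags & tags), name)
        for name, tags in guides.items()
        if name != current
    ]
    scored.sort(reverse=True)
    return [name for score, name in scored[:top] if score > 0]

print(related("watering-monstera"))
```

This is exactly where the model falls short: two shared tags always beat one, even when the single-tag match is the guide readers actually want next.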
I asked myself: "How can you add more relevancy between guides?" The simple (yet not so simple) answer I found was by looking at reader behavior. Which guides do readers look at in a single session? After reading a guide, do they look for more information or do they leave? With that idea, I looked at some ways to "calculate" which guides are most relevant based on tags and reader behavior. I couldn't use the current system, flat files, because that would be far too slow. MySQL also wasn't a great option: the queries would need too many joins over too much data, which would again be too slow.
Then I found Neo4j, a lightning fast graph database where relationships are a core concept. Using Neo4j, I can quickly and easily find the most relevant guides other readers looked at after looking at the current guide. Combine this with the most relevant guide (based on tags) and I can find the 3 most relevant guides for a guide within milliseconds. This is a great solution, because:
Using this new model to find the most related content, I'm able to help both people and machines. This is what I've been looking for since the beginning, and it's the perfect solution for me at this point in time.
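To illustrate the idea of combining tag overlap with reader behavior, here's a toy scoring sketch. The weights, guide names, and counts are invented; this only shows the shape of the scoring, not the actual Neo4j query.

```python
# Hypothetical pairwise data: shared tag counts and same-session co-views.
tag_overlap = {("guide-a", "guide-b"): 2, ("guide-a", "guide-c"): 3}
co_views = {("guide-a", "guide-b"): 40, ("guide-a", "guide-c"): 5}

def score(pair, tag_weight=1.0, view_weight=0.1):
    # Tags give a baseline; reader behavior breaks the ties tags can't.
    return tag_weight * tag_overlap.get(pair, 0) + view_weight * co_views.get(pair, 0)

ranked = sorted(tag_overlap, key=score, reverse=True)
print(ranked[0])  # ('guide-a', 'guide-b'): fewer shared tags, but far more co-views
```

With behavior in the mix, a guide that readers actually visit next can outrank one that merely shares more tags.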
Through 3 iterations, I've tried to group content using tags and reader behavior by linking between the different pieces of content. I've used this strategy to signal to crawlers and search engines which content adds context to other pieces of content, while adding to the UX of my readers. By adding tags to the different pieces of content, I've been able to influence the relevant links between pages a little bit, but this wasn't quite watertight. By adding reader behavior to this data model, I've been able to show people the content that I think is relevant to them, but also what other readers of the same page find relevant.
In this case, readers are helping each other find relevant content without having to do anything more than read what's available. In an ideal world, I won't have to use tags any more, because the model will be able to figure out what readers should read next. But that's still in the future and might be a post a year down the road.
2020 has been a strange year for us all, but I don't want to linger on the negatives this year has brought. Instead, I'd like to highlight everything that went well for me, and I'd like to invite you to do the same to end this strange year on a high note. In this post, I'll highlight everything I've learned this year and everything I'm proud of. Quarantine has been a strange time, where all of a sudden I had a lot of extra time to spend on myself and my projects.
At the beginning of the year, I needed to make a few business processes, written in PHP, faster. I ran out of ideas on how to do this in PHP, so I gave Go a try. What a major success that turned out to be. The process execution time was shortened significantly: from 1 week to "just" 3 hours. Learning Golang turned out to have a major impact on my career, but more on that later.
The power and simplicity of Go convinced me to migrate a few other parts of a larger monolith application, written in PHP, to Go microservices.
My trusty CMS, Aloia CMS, got its first stable version this year (February) and has become a powerhouse for my projects. Its simplicity and extensibility made it a very reliable base for my portfolio and Plant care for Beginners, which I'll talk about next. The latest version (3.3.0) brings model events when content is saved or deleted. This change alone makes it possible to build all kinds of automation scripts in the consuming Laravel application. For example, it allows me to easily synchronize data between the flat files and Neo4j.
In 2021, I hope to continue to improve Aloia CMS by making the performance even better and allowing for easier extensions for the built-in commands. This change will make it even nicer for developers to use.
2020 was THE year for my side projects! The extra time quarantine gave me allowed me to write many plant care guides for Plant care for Beginners. In May, I enabled Google Adsense, and from that point my side project actually started to make some money. At the beginning of December, I switched my ads from Adsense to Mediavine and I hope to stay with them for a while.
In the short term, I will migrate parts of the website from a purely file-based CMS (Aloia CMS) to Neo4j. The data will still be stored on disk, but content suggestions and content aggregate pages will be generated using Neo4j. This makes it much easier for myself to help visitors find the information they might be looking for.
Sander had a great idea for a new side project: CRO-tool. This project aims to help UX and CRO (conversion rate optimization) professionals use psychology for their A/B experiments. Together, we've worked hard on getting an MVP off the ground, with success! We've sold a few early-bird lifetime licenses for our tool and we're planning to release around 20 new psychological theories over the next few months to keep adding value for our subscribers.
CRO-tool has taught me a lot about running a SaaS project on which people depend for their day job. The tool has to be as stable as possible at all times and changes should be made with care.
Throughout this year, I've been looking at automating everything that could be automated. This includes deploying all the websites I manage. Up to this point, this had always been a manual process. Using Ansible, I've created playbooks to automate the deployments of my websites. This helps keep my mind at ease and prevents human error while deploying. I'm no longer at risk of forgetting something during deployments, because it has all been automated.
Neo4j came on my radar quite late in the year, around mid-November. However, in this short time it has changed my views of what a database should be. A database should shape to your needs, not the other way around. If you're working with complex data, you'll often struggle with many joins in a query to get the data you need. Neo4j makes this much easier and the performance, even while doing many "joins", is many times better than MySQL. I've built a few projects with Neo4j already. One of which is a Go + GraphQL server and a Neo4j database. The other is an extension for Plant care for Beginners, using PHP and Neo4j together. Both have been a pleasure to work with so far.
I've found a new job this year! I've been working at Tubber for the past 5 years and I was ready for the next step. I've found the right place for me at Afosto. At this new job, I'll get to work with Kubernetes, Microservices, and a lot of Golang. I enjoyed working with Go so much that it ultimately resulted in finding a new job where I can use it more often. I hope to learn a lot about the topics I just mentioned in 2021 and using them in production.
Since I've had more time to spend on my side projects, having a mastermind group has been an amazing experience. This mastermind group has given me new insights and ideas to improve my side projects and make them profitable. It has been fine working on my side projects by myself, but having others there as a sounding board and as your best critics is great. "Fail fast" is really one of the things I experience during the masterminds, because you have to explain your problems and progress. By thinking about it and getting questions, you'll quickly know if your ideas are going to work or if they need a little more polish.
These are some fun statistics for me about 2020:
2020 was a strange year, but I've been very productive throughout it. I'm very happy with what I've accomplished and I hope to carry this productive streak into 2021. I don't think I'll start any new projects in 2021, but who knows what will happen. I'm planning on improving what I've got now, instead of adding more to my list of projects. Quality over quantity.
I hope you've had a good year, considering the current state of the world. Even if you haven't had the time or energy to be productive this year, you've made it. Productive or not, I hope you learned something new or found something that brings you energy in some way. This year has highlighted how important mental health is, so work on that first, before trying to be "productive".
Writing a book is something most bloggers or content writers have thought about at some point. So have I, so I looked at ways I could reuse a few blog posts as the base for a book. As all of these blog posts are Markdown files, I initially looked for a way to turn Markdown files into an e-book by converting them to a PDF. There were a few options available, for example themsaid/ibis. This library is easy to use, but also very limited in customization.
Then I was pointed towards Asciidoc, something I wasn't familiar with. In this post, we'll go over what Asciidoc is and how you can use it to generate PDF, EPUB and MOBI documents. Don't let the official Asciidoc website fool you with its simplistic and outdated looks: it's very modern and can do all the heavy lifting, so you can focus on your content.
Asciidoc is a markup language like Markdown, but it has many more features. Like Markdown, you can convert Asciidoc to more common formats like HTML. However, you can do a lot more with it. Markdown is built with basic HTML elements in mind, but sometimes the HTML is too complicated and you'll need to inline HTML code in your Markdown files. Markdown is supposed to be simple.
Asciidoc, on the other hand, has many plugins you can use to parse the files into other formats. Parsing Asciidoc to HTML is just one of the parsers. There are many more available like PDF, EPUB, Docbook, and MOBI. You can use the same Asciidoc file to create all kinds of different formats of your content. Because you can use the same file to create different types of output files, you can create a format of your content that works for your readers, not just what's easiest for you.
In order for you to create a PDF from Asciidoc files, we'll need some software. This is what we'll need to get started:
And the example Gemfile looks like this:
source "https://rubygems.org"
gem "asciidoctor", "~> 2.0.10"
gem "asciidoctor-pdf", "~> 1.5.3"
To install these gems, run the following command in the folder where you keep the Gemfile:
bundle install
In Markdown, files only contain content, unless you use a separate parser for things like YAML Front Matter. For Asciidoc files, all configuration is done in the ".adoc" files themselves. This is convenient, because you'll have everything you need in a single place. Let's go over a basic example that we'll call book.adoc.
= Title of your book
:author: Author Name
:email: email@example.com
:revnumber: v0.1
:revdate: 02.12.2020
:notitle:
:doctype: book
:chapter-label:
:sectnums:
:toc: left
:toclevels: 2
:toc-title: Table of Contents
:front-cover-image: image::images/cover.jpg[]
:description: This is the description of your book
That looks very intimidating, but a lot of it is optional and easy to look up in the documentation. Let's go over what each of these tags mean and why you would want to use them. As a little sidenote, you can also specify the author in a different format:
= Title of your book
Author Name <email@example.com>
v0.1, 2020-12-01
However, I like to work with as much verbosity as possible, to make it as easy on myself as I can. You can choose which format you'd like to follow.
Besides the author information, we still have these tags left:
:notitle: means we don't want to display the title of the book at the front of the book. Instead, we'll display an image as the title page using ":front-cover-image:".
With :doctype: book, we mark this document as a book, rather than a website. This sets a few defaults, like alternating left and right pages.
With :chapter-label: you can specify a prefix for chapter titles. This defaults to "Chapter", resulting in chapter names like "Chapter 1. Title of chapter 1". If you don't want the "Chapter" prefix, like me, you can override this behavior by leaving it empty. This will result in "1. Title of chapter 1".
In the previous label (:chapter-label:) we specified a prefix, or rather removed it. But there is still a number in the chapter title. This number appears because of :sectnums:. If you prefer not to have numbers for your sections (1, 1.1, 1.2, 2, etc.), you can leave this tag out.
The tag :toc: tells the Asciidoc parser that we want a Table of Contents. This will automatically add all chapter and section headers to the table of contents, without any manual work from you. If you prefer to only have chapters in the Table of Contents, you can change :toclevels: to "1". Setting it to "2", as we do here, means that both chapters (1, 2, 3, and 4) and sections (1.1, 1.2, 2.1, 2.2) will be added. You can even set it to 3 or more, which adds subsections (1.1.1, 1.1.2, etc.) as well. Finally, with :toc-title: you can specify what to call the Table of Contents. In this case it'll be called "Table of Contents", but you can name it anything you'd like.
Earlier, we went over the fact that we don't want the title of the book on the first page, but rather want to use an image as our cover. You can specify this behavior by adding the :front-cover-image: tag with a path to an image. The syntax for adding images in an Asciidoc document is "image::./images/cover.jpg[Alt text here]". It's very similar to Markdown.
You can add a description as metadata to your PDF by giving :description: a value. This is optional, so you can also choose to skip it.
If you ever get stuck with the formatting or which tags to use, you can look it up on the internet, because unlike Markdown, there is only one standard. This makes searching for Asciidoc keywords on Google very easy.
Asciidoc syntax is quite easy to learn, but there are a lot of things you can do. To keep this post simple, we'll only go over the most commonly used ones and skip the majority of the others. If you'd like a full list of the syntax, there is a great guide on the Asciidoctor website that describes everything you might want to use. For now, we'll stick with the most common ones. I'll convert these to HTML tags to give them some context:
As I mentioned, there are a ton more, but these are the most commonly used ones. As you may have noticed, an H1 is two equal signs instead of just one. A single equal sign is the title of the book, which is why it's all the way at the top of the configuration section.
Now that we have the configuration and know the basic syntax, let's write a simple "book" that we can convert to a PDF in the next section. In Asciidoc, you write the content in the same file as the configuration from earlier, but you can also choose to include separate files and write your content in there. You can start writing content under the configuration from earlier; just make sure to leave an empty line like so:
# ... rest of the configuration
:front-cover-image: image::images/cover.jpg[]
:description: This is the description of your book
== Title of chapter 1
Instead of writing the content in the main book.adoc document, you can also include chapters by doing this:
# ... rest of the configuration
:front-cover-image: image::images/cover.jpg[]
:description: This is the description of your book
include::chapters/chapter-1.adoc[]
Now that we have some content, we can generate a PDF from this "book".
If you remember from the prerequisites, we installed the Ruby gems we need to generate a PDF from an Asciidoc document. These gems are going to help us create a PDF with a single command. We've named the main file of our book "book.adoc". We're going to export this file to "book.pdf" using the following command:
bundle exec asciidoctor-pdf book.adoc
There are several things you can specify while using this command, for example the output filename. If you want to change the output filename, you can do so by passing a flag to the command:
bundle exec asciidoctor-pdf book.adoc -o my-amazing-book.pdf
As you can see, it's a simple and readable command. Now you'll have your PDF file and you can open it to view your new book.
Asciidoc has a lot of features and you can change almost everything about it, like the styling of your PDF. I won't go into the details of styling in this post, but if there is interest (contact me), I could write a post about it. Instead, I'll leave you with a link to the best resource I could find for styling your PDF, and a command you can use to apply your styles:
bundle exec asciidoctor-pdf book.adoc -a pdf-style=themes/light-theme.yml -o my-amazing-book.pdf
As you can see, you can specify a stylesheet to apply to your PDF. The best reference for how to write this can be found on GitHub.
Markdown is a simplistic markup language that lets you focus on the content instead of the individual elements, but it's also very limited. Asciidoc gives you the same simplicity while still allowing you to create more complex structures. In this post, we went over how you can convert an Asciidoc file to a PDF and optionally add some styling to it. The learning curve can be quite steep, in my personal experience, but once you understand how it works you'll have superpowers.
It's probably very clear by now, but Ansible is one of my favorite deployment/automation tools. It lets you very easily create reproducible scripts that you can use for all kinds of different purposes. If you're interested in finding out more about what I've done with Ansible, have a look at any of the links below.
Ansible is usually used for server orchestration, but you can do so much more with it. What about setting up your development environment to be perfectly suited for the project(s) you're working on...completely automatically? You won't have to deal with setting up your environment from scratch on new systems any more and you can share your Playbook with colleagues to help them quickly get started on projects.
These two use cases might be enough to convince you of the benefit of automatically setting up your development environment, but in case you need another reason: In case you mess up, I mean really mess up, you can run your Playbook again and continue as if nothing happened. Let's get into what you need to automatically set up your development environment and how to do this.
Installing Python and Ansible on Debian based systems is a single command:
sudo apt-get install python3 ansible
You can now start writing your playbook, as it's called in Ansible.
Creating a playbook is essentially creating steps in a process and defining what those steps do. You can use several modules for this, for example: git, docker, apt, service, and shell. These modules are included in Ansible and you can make use of these to simplify and standardize the steps in your playbook. If you want to, you could do everything using shell commands, but it's easier to maintain if you use a module that's built for what you're trying to do. This way you can look up what a step does instead of deciphering commands.
For example, we might need to install git, docker, and python3-pip. In that case, you could use the "apt" module like so:
- name: "Install required software"
  apt:
    name: "{{ packages }}"
    state: present
  become: true
  vars:
    packages:
      - git
      - docker
      - docker-compose
      - python3-pip
If you wanted to install these using shell commands, it would look like this:
apt-get install git docker docker-compose python3-pip
We specify that the packages should be "present". The apt module knows what this means and installs these packages if they're not installed yet. This means that you could run this playbook multiple times, but it'll only install the packages if they're not yet installed on the system.
Since we also need Docker for our development environment, it's useful to start the Docker engine when you boot your system. You can often do this using the following command:
sudo systemctl enable docker && sudo systemctl start docker
But, you could also use the "service" module to make this step a little more abstract and less vulnerable to potential changes:
- name: "Starting Docker"
  become: true
  service:
    name: docker
    state: started
    enabled: yes
With this "service" module, we're making sure that we start Docker when the system boots up, but also that it's already running right now. You can use whichever way of installing software you prefer, but I like to stick as close to the included modules as possible. This way, when something changes under the hood, I won't have to update my playbooks.
Now that we have the required software on our machine, we need our code from GitHub. You guessed it, there is an Ansible module for that. This is what it looks like:
- name: "Pull our Git repository"
  git:
    repo: git@github.com:roelofjan-elsinga/portfolio.git
    dest: /var/www/html/roelofjanelsinga.com
    version: master
    accept_hostkey: yes
This assumes that you have permission to pull from the specified GitHub repository. Since the repository I've specified here is this website, and it's public, you can pull it without any issues. You can automate setting up SSH keys with GitHub as well, but I won't go into that in this post.
In the previous step, we've gone over some basic steps to install software we need to be able to run our application. You might also have seen that I included python3-pip in that list of applications. You might not need this for your software, but we will need it for the Docker module that's built into Ansible. For this example we want to launch a simple docker environment with docker-compose. First, let's look at the docker-compose.yml configuration:
version: "3.8"
services:
  nginx:
    image: nginx:1.19.2-alpine
    volumes:
      - ./:/var/app
This maps the current directory to /var/app inside the Docker container. As you can see in this example, we're going to need an Nginx container. We can pull it from Docker Hub using Ansible:
- name: "Pull the Nginx image"
  docker_image:
    name: nginx:1.19.2-alpine
    source: pull
And now you'll have the Docker image on your host and you can launch your docker-compose environment with Ansible:
- name: "Launch the docker-compose environment using shell"
  shell: docker-compose up -d
You can also use the docker-compose module for Ansible in case you have more detailed needs. Since I'm only bringing the environment up, I don't feel the need to install a community plugin.
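For reference, a sketch of what that module-based version might look like. This is an assumption based on the clone destination used earlier in this post, and the module lives in the community.docker collection:

```yaml
- name: "Launch the docker-compose environment using the module"
  become: true
  community.docker.docker_compose:
    project_src: /var/www/html/roelofjanelsinga.com # directory containing docker-compose.yml
    state: present
```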
So we have a basic Playbook, but now we need to configure hosts to run this playbook on. Since we're only interested in running this Playbook on our local machine, it's easy to specify your host: it's called "local". Let's add that to our Playbook and let's see what the finished Playbook looks like:
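For `hosts: local` to resolve, your Ansible inventory needs a matching group. A minimal sketch (the inventory path may differ on your system):

```ini
# /etc/ansible/hosts (or an inventory file you pass with -i)
[local]
localhost ansible_connection=local
```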
- hosts: local
  tasks:
    - name: "Install required software"
      apt:
        name: "{{ packages }}"
        state: present
      become: true
      vars:
        packages:
          - git
          - docker
          - docker-compose
          - python3-pip
    - name: "Starting Docker"
      become: true
      service:
        name: docker
        state: started
        enabled: yes
    - name: "Pull our Git repository"
      git:
        repo: git@github.com:roelofjan-elsinga/portfolio.git
        dest: /var/www/html/roelofjanelsinga.com
        version: master
        accept_hostkey: yes
    - name: "Pull the Nginx image"
      docker_image:
        name: nginx:1.19.2-alpine
        source: pull
    - name: "Launch the docker-compose environment using shell"
      shell: docker-compose up -d
This is our simple, yet complete playbook, and we're now ready to run it and see our local machine being set up from a fresh install of a Debian-based distro into a development machine, exactly like your colleagues have it as well. That's really the power of Ansible here: when you make a change to your playbook and share it with your colleagues, they can run the updated playbook and it'll set up their systems in exactly the way you intended. We can run this playbook using a single command:
ansible-playbook playbook.yml
This will now go through all the steps you've specified in your playbook. If you're not satisfied with the results, simply change your playbook and run it again.
Creating a playbook is often a process of changing a lot of little things here and there to get it just the way you want. But even though it seems like a lot of work, it can save you countless hours in the future. When you move to a new machine, simply run the playbook again and you can continue where you left off, without having to tinker for hours to get everything back the way you had it. Setting up your development environment with Ansible is a great way to save yourself and your colleagues headaches. An Ansible playbook keeps everyone on your team in the same environment and makes switching machines an easy task, not a chore.
When you're using Home Assistant for your home automation and you've got a few MQTT devices, you might want to create simple switches for them. However, if you're like me, this simple task turned out to be a very tough one. This post is as much for you as it is for me, because I forget how to do this every time, and each time it takes me hours to get going. In this short post, we're going to do 3 things: register the MQTT device as a sensor (optional), define it as a switch, and create a visual element for it on the dashboard.
There are a few prerequisites when you go through this process:
Let's get right into it, so you can get back to building amazing automations.
Defining your devices as a sensor is optional and doesn't have anything to do with creating a simple switch in Home Assistant, but it can allow you to create triggers based on the state (on or off) of your MQTT device in the future. So if you want to do this, you can go through this step, otherwise you can go to step 2.
To register your MQTT device as a sensor in Home Assistant, you need to define it in the configuration.yaml file. Let's look at a basic example:
sensor:
  - platform: mqtt # This is an MQTT device
    name: "LED Switch 1" # Choose an easy-to-remember name
    state_topic: "home/office/led/get" # The topic to read the current state
After adding this sensor information, you can access the state of your MQTT device as "sensor.led_switch_1", or whichever name you specified: "sensor.whichever_name_you_chose". You can use this entity as a trigger to automate other things in the future.
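As a sketch of what such a future automation could look like (the entity name comes from the example above; the trigger payload and the notification action are placeholders you'd adjust to your device):

```yaml
automation:
  - alias: "Notify when the LED turns on"
    trigger:
      - platform: state
        entity_id: sensor.led_switch_1
        to: "1" # the raw payload your device publishes; adjust to match
    action:
      - service: persistent_notification.create
        data:
          message: "The office LED was switched on"
```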
Now for the most important step of this whole post: defining an MQTT device as a switch in Home Assistant. To do this, open the configuration.yaml file again and add the following configuration:
switch:
  - platform: mqtt # Again, it's an MQTT device
    name: "LED Switch 1" # Choose an easy-to-recognize name
    state_topic: "home/office/led/get" # Topic to read the current state
    command_topic: "home/office/led/set" # Topic to publish commands
    qos: 1
    payload_on: 0 # or "on", depending on your MQTT device
    payload_off: 1 # or "off", depending on your MQTT device
    retain: true # or false if you want to wait for changes
The qos value depends on your situation, but in short the levels mean this: 0 delivers a message at most once (fire and forget), 1 delivers it at least once (possibly with duplicates), and 2 delivers it exactly once.
If you're building a simple switch, you can choose 1 or 2. Not a lot of messages will be sent and you want to make sure that your MQTT device received the message. You should now restart Home Assistant to make sure the configuration is loaded. Do this by going to: Configuration -> Server Controls -> Restart.
Now wait until your instance comes back online and you can move to the last step.
Now that we have registered your MQTT device as a switch, we can create a visual element for it on your dashboard. You can modify your dashboard by clicking the three dots at the top right of your dashboard and click "Edit dashboard".
If you've never edited your dashboard you'll get a message asking if you're sure you want to edit your dashboard. Just say yes and you'll have a screen like the screenshot below.
Click the orange button on the right to add a new element. You'll get an overview and we're interested in either the "Entities" or "Button" card from the screenshot below.
If you want a button that lights up when your MQTT device is in the "on" state and is dark when the state is "off", then choose the "Button" card. If you just want an on/off toggle, choose the "Entities" card. By creating a switch in step 2, you should now be able to easily create a visual element for your MQTT device and toggle its state by pressing a simple button in your dashboard, like in the screenshot below.
And when you toggle the switch or press the big lamp in your dashboard, you'll trigger the "on" state of the MQTT device. This will automatically update the state in your dashboard like the screenshot below.
If you registered your MQTT device as a sensor in step 1, you can now trigger other automations based on the state of your MQTT device when you toggle your switch or press the button. I hope this helped you, I know this cost me hours to figure out by myself, so I'm already saving myself hours next time.
As developers, we deal with fixing bugs every day. If your error messages are clear, it's often quite easy to fix these issues. Fixing issues for a single application is also not the most difficult thing in the world: you'll have to keep track of a limited number of moving parts that could potentially go wrong. But what happens when you're developing an entire platform consisting of multiple services? How do you keep track of potential issues between them at scale? And how do you deal with discovering issues if different services are spread out over multiple nodes?
When dealing with larger infrastructures, you're going to need good monitoring solutions you can very easily deploy on multiple nodes at the same time, using reproducible steps. One of the monitoring solutions that fit this description perfectly is Netdata.
In this post, I'm explaining why Netdata is a great option for your infrastructure monitoring and how it has helped me to fix a major problem with my infrastructure within the first hour of installing it onto some servers.
Netdata is monitoring software for servers, but really for anything that runs Linux. You could also install it on your workstation and keep track of your system resources with Netdata instead of top or htop. Netdata comes with a built-in dashboard that you can access at http://localhost:19999 on your local machine, showing all the system information you could ever want. When you've deployed it on multiple nodes, you can feed this data into Netdata Cloud and view all of them in your dashboard at the same time. You can see the resource usage of your entire infrastructure at a single glance.
Netdata is very easy to install and comes with a few nice features, which makes it perfect for infrastructure monitoring. You can install the Netdata client on your devices (including all nodes in your infrastructure) and connect it to a single dashboard: Netdata Cloud. This is a free dashboard you can feed all of your nodes' information into to create a clean overview of your servers and their health statuses. This dashboard shows you CPU usage, RAM usage, inbound/outbound bandwidth, and a lot more information that you could use to figure out the health of your nodes.
Information is all great, but it's not good enough if deploying Netdata to all your servers automatically isn't possible. Luckily, you can install Netdata using bash scripts, and adding each individual node to your Netdata Cloud dashboard is, yet again, a bash script. This means you can automate this process using, you could've expected this, Ansible. Ansible helps you run the same script on multiple nodes automatically, in a reproducible way. This is a perfect match with Netdata, because the installation is easy and reproducible. Adding the installation scripts for Netdata to Ansible ensures that you're always able to monitor the status of your nodes as soon as they've launched.
Running a few bash scripts is quite easy, just make sure you know what you're executing when you run bash scripts from the internet.
Installing the client is a single line, which you'll need to run on your target node (the server):
bash <(curl -Ss https://my-netdata.io/kickstart.sh)
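Wrapped in an Ansible task, that could look something like this sketch (the --dont-wait flag makes the installer non-interactive; the creates path is an assumption to keep the task idempotent):

```yaml
- name: "Install the Netdata client"
  shell: bash <(curl -Ss https://my-netdata.io/kickstart.sh) --dont-wait
  args:
    executable: /bin/bash
    creates: /usr/sbin/netdata # assumed install path; skip the task if already installed
```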
After you've created an account on Netdata Cloud, all you need to do to link the Netdata client to your Cloud account is run a script similar to this on your target node:
sudo netdata-claim.sh -token=your_token -rooms=your_room -url=https://app.netdata.cloud
You'll be given a token and room identifier in Netdata Cloud, so don't worry about copying the line above verbatim. Now your server will automatically be connected to Netdata Cloud and you'll be able to see all kinds of metrics in your dashboard: CPU usage, RAM usage, disk I/O, inbound/outbound bandwidth, and more.
The data transfer is encrypted, so you can rest assured that it's safe.
Now that you know what Netdata is and have gotten a little overview of what it can show you, let's go into how it helped me to fix a major (hidden) issue within the first hour of deploying it in my infrastructure.
As a little back story, one of the servers is running an Apache Solr instance to serve as a search engine and it has been quite slow over the past few months. Apache Solr has a nice dashboard that shows the memory usage and the JVM memory usage, as it's built using Java.
This was always stuck around 90% of memory usage, so I assumed that the performance issues had something to do with RAM limitations. This let me down a path of upgrading the server within AWS to give the Solr instance more memory to work with, but this didn't have the expected results.
After installing Netdata on this server, I saw that the CPU was almost constantly hitting 100% usage, which was strange as the server wasn't under heavy load at the time. It seemed that RAM limitations weren't the issue after all, but the CPU was having a tough time. So after a bit of searching around on our favorite search engine, I found this Jira issue. It pointed out that the version we were running (7.7.1) was causing memory issues and that this was patched in the next version (7.7.2). So after upgrading the Apache Solr server to the latest 7.7.x version (7.7.3), the problems were resolved. The search engine was at least 3 times quicker, even under heavy load while indexing large amounts of documents.
It would've taken me much longer to solve this issue without Netdata, because the memory statistics in the Apache Solr dashboard were misleading, while the usage statistics in Netdata showed the real cause of the problem. You might have been able to solve this issue using top or htop as well, but then you'd already need to know what you were looking for. I was looking in the wrong place and Netdata helped to point me in the right direction.
Using Netdata to visualize the health of several nodes helped me to find an issue that was previously hidden. The monitoring solutions I had available to me all pointed to RAM limitations, while Netdata pointed me to CPU limitations. This different perspective helped me to find and fix a major issue, that has been around for months, within the first hour of installing the software. It has contributed to a more stable and much faster search experience on the platform.
If you've developed any applications that run on servers, you have probably heard of Go (Golang). It's a compiled language with very few moving parts, so you won't have to spend a long time getting to know the language before you start building applications with it. Besides being relatively easy to learn, it's also known for being lightning fast, because the code you write is compiled into binaries that run natively on your system.
Those two aspects, quick to learn and amazing performance, compelled me to pick it up in January of this year (2020). There were a few things that I had been optimizing in PHP for months and just didn't get the performance improvements I was hoping for.
Once I figured out how to use Go, I had 2 applications running in production within 2 weeks of picking it up. Now, 9 months later, I've sprinkled some Go here and there to improve performance significantly on multiple occasions.
These are some of the projects I've developed in those 9 months: a CLI application that indexes documents in Apache Solr, a GraphQL server that acts as an API gateway, and a utility web server for heavy data processing.
In this post, I'll go over these 3 projects and explain what I've learned from them and why they came into existence.
CLI applications are a great way to pack a lot of functionality into a little package and perform tasks very effectively and predictably. The CLI application to index documents in Apache Solr didn't start out that way. The 2 applications I mentioned in the intro were designed to each take over a small part of this process. I used both applications as filters to reduce the amount of data going through PHP. After merging these applications, it turned out that 60% of the entire process was now in Go and the performance boost was significant.
As the new execution time of this script was mere milliseconds instead of seconds, an unexpected bottleneck showed up: HTTP latency. The latency was "only" 10 milliseconds, but this is quite significant when the total execution time is now 5 milliseconds instead of 5 seconds. The fact that I was bottlenecked by HTTP, motivated me to migrate more from PHP to Go. Over the course of a month, I had migrated the entire data processing script to Go.
Once again, I ran into a bottleneck: PHP was still responsible for retrieving data from the databases, sending it to the Go server, and indexing the data in Apache Solr. As it's PHP, these steps were executed sequentially rather than concurrently. But Go was built for concurrency, so why not migrate everything and let PHP delegate tasks instead of performing them? That's exactly what I did. As I no longer needed complex data from PHP and didn't have to return any data, I no longer needed a web server, so I converted the application to a CLI tool.
Months later, this CLI tool is still going strong and works like this: PHP executes the binary with basic input arguments, then Go retrieves all data from the database, processes it, and submits it to Apache Solr. Rather than generating documents sequentially, it's done concurrently.
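The sequential-to-concurrent switch described above can be sketched with Go's goroutines and a WaitGroup. This is a toy stand-in, not the real indexing code: processDocument and the document IDs are placeholders for the actual database and Solr calls.

```go
package main

import (
	"fmt"
	"sync"
)

// processDocument stands in for the real work: fetch a record from the
// database, transform it, and submit it to Apache Solr.
func processDocument(id int) string {
	return fmt.Sprintf("indexed document %d", id)
}

// processAll fans the documents out over goroutines instead of a
// sequential loop, then collects the results and returns how many
// documents were processed.
func processAll(ids []int) int {
	var wg sync.WaitGroup
	results := make(chan string, len(ids))

	for _, id := range ids {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			results <- processDocument(id)
		}(id)
	}

	// Wait for every goroutine to finish before closing the channel.
	wg.Wait()
	close(results)

	count := 0
	for range results {
		count++
	}
	return count
}

func main() {
	fmt.Println(processAll([]int{1, 2, 3, 4, 5}), "documents indexed")
}
```

This launches one goroutine per document, which Go handles comfortably; a worker pool would bound concurrency if the real workload needed it.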
I learned to identify parts of larger processes that could be improved by outsourcing the execution to Go. Identifying these parts was the toughest challenge, harder than learning Go and building the applications.
I've written several blog posts about this topic, in case you'd like to get more in-depth:
GraphQL is an amazing abstraction layer for your API endpoints. You can abstract many different systems without exposing them to the consumer of your API. Another benefit is that you can point your API consumers to a single, self-documenting place to retrieve all of their data, instead of making them fetch data from 4 different places and combine it on their side. GraphQL serves as an API gateway.
I've built a GraphQL server in PHP in the past and it worked quite well, but the performance wasn't always great when fetching a lot of data. As I'd just finished the CLI application and seen the performance boosts there, I decided to build a GraphQL server in Go. The idea was to migrate the slowest parts of the PHP GraphQL server first and see where to go from there.
The biggest challenge was the different way these two GraphQL implementations are positioned into the architecture. The GraphQL server in PHP is built into the platform, but the GraphQL server in Go is a standalone API gateway that utilizes existing API endpoints to retrieve data.
Now, after having worked with the standalone API gateway for a few weeks, I'm glad to say that this approach is working well. Managing dataflows is much easier if you're treating the GraphQL server as an API gateway instead of a query language for a database.
I've written more about this project here:
This utility web server looks a lot like the two filter applications I developed in the first two weeks of using Go. It's processing larger amounts of data much more efficiently than PHP could and returns only that which PHP needs to perform its tasks. This web server improved the runtime of this particular process from 10-20 seconds to 3-4ms.
A summary of this application is this: The application takes a list of existing items from the database and a list of items retrieved from an API endpoint. The list of items from the API endpoint is the source of truth and the application needs to determine what to do with all of the items.
These are the actions it recommends for the PHP script: create the items that only exist in the API response, update the items that exist in both lists, and delete the items that no longer appear in the API response.
This isn't very difficult to do by itself, but if you have two large lists, it's a slow process. This was never a problem in PHP, because these lists were usually limited to 10 items each. But when they started to grow to hundreds each, sequential execution of this became problematic. Doing this cycle once, twice, or ten times is no problem, but it adds up if you need to do it thousands of times. Go is able to do this much faster and instruct PHP what to do with these items in milliseconds, rather than seconds. The calculations are still done sequentially, because the delegation is still done in PHP, but the performance boost from seconds to milliseconds is a huge improvement.
If the past tells me anything, this entire process might be migrated to Go in its entirety by next year.
I started learning Go in January (2020) and it has been an amazing journey for the past months. I've learned to write entire applications in a few days, rather than a few weeks or months. The simplicity of the language and the performance boosts it brings are very compelling reasons to start learning it.
Over the past months of working with Go, I've developed multiple notable applications that have run in production at some point, most of which are still around. Outsourcing some of the more resource-intensive processes from PHP to Go has contributed to faster applications, easier deployments, and overall more stability. Memory issues and sitting around waiting for something to finish are in the past now.
The biggest challenges weren't learning Go or developing the applications, but determining what and how much to migrate to solve the problems. The choice to migrate something is always based on the effort it takes versus the headaches it resolves. So far, more headaches have been resolved than the effort it took.
API gateways are great for development teams because they expose the data you need for all kinds of different purposes in a central location. There are a few great REST API gateways out there, like KrakenD, but what if you wanted to go in a different direction and choose GraphQL for your API infrastructure? That works out perfectly, as it's one of the goals of GraphQL: abstracting many different services into a single place and giving developers very fine-grained control over the data they need.
In this post, we're going to look over a GraphQL implementation, which keeps the previous sentence in mind: Abstracting existing REST API Endpoints into a fast GraphQL server. To build the GraphQL server, we're going to use Golang: It's fast, it's memory efficient, and provides just enough tools, but not too many. The GraphQL package we'll use is github.com/graphql-go/graphql. This package is very closely aligned with the JavaScript implementation graphql-js. This makes it a perfect candidate because you'll be able to follow JavaScript tutorials and be able to port this to Go.
To show how you can abstract an existing REST API Endpoint in GraphQL, we're going to need an example project. I've created an example project at github.com/roelofjan-elsinga/graphql-rest-abstraction. You can use this to follow along in this post, as I will go over different parts of the GraphQL server and explain what's going on.
The entry point of our GraphQL server is main.go. Here we specify two resources in our GraphQL server: users and user.
We intend to use a dummy REST API service to fetch JSON data for all users and also a single user. The "users" resources will be used to fetch all users at https://jsonplaceholder.typicode.com/users, while the "user" resource will be used to fetch a single user by ID from https://jsonplaceholder.typicode.com/users/1 or any other user available to us.
Now that we have a REST API we can use, we can create a resource to be able to fetch this data through a GraphQL resource. You can find this resource in queries/users.go:
Here you'll find a method "fetchUsers", where we call the REST API endpoint and convert the data into a Go struct, which is located in models/user.go. Our field "Users", will return the User slice from "fetchUsers".
In the "users" field declaration we specified the type we expect to receive from this GraphQL resource: graphql.NewList(userObject). We told GraphQL we're returning multiple users. The userObject is one of our GraphQL resources and you can view it in full here. It's too much code to inline here, so I've linked it up to the exact line you need in the source code. The userObject itself also contains fields and nested objects (address and company). These nested objects are linked to the exact line as well. As you can see, objects can be nested within nested objects.
Now that we've specified all fields and we can retrieve data from the REST API, it's time to give our new GraphQL resource a try. Follow the setup steps (there are only 4, and they're easy) and try to execute the following GraphQL query:
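For example, a query along these lines (the field names follow the JSONPlaceholder user schema used in this post):

```graphql
{
  users {
    id
    name
    email
  }
}
```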
You should now see all of your users appear in the response, but only the fields we've specified in our query:
I've redacted the rest of the users to keep this snippet short. As you can see, only the requested fields were returned, as we expect from GraphQL.
Now that we've seen we can retrieve all users, we'll also go into retrieving a single user. The userObject is the same as we've looked at before, so I won't go over that again, but the field declaration for "user" has changed a little bit compared to "users", and so has the query. Let's look at the field declaration first. It's located at queries/user.go and looks like this:
There are three main differences: the field's type is a single userObject instead of graphql.NewList(userObject), it declares a required "user_id" argument, and the resolver fetches a single user by ID instead of the full list.
I mentioned that the GraphQL query now has changed as well, so let's look at what it looks like:
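It would look something like this (the argument name follows the field declaration described above):

```graphql
query ($user_id: Int!) {
  user(user_id: $user_id) {
    id
    name
    email
  }
}
```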
This query needs a user_id to be submitted (of type Int!), so we can do that using {"user_id": 1}, or whichever user_id you want to retrieve from the API endpoint.
This query results in the following response:
As you can see, we now only have the user with ID of 1.
This guide has shown you how you can create an API gateway using GraphQL to add an abstraction layer in front of your existing REST API endpoints. There are a few things missing, like authentication and a DataLoader for efficient data fetching, but this is a simple example to show how it works. Using this method, you can expand your GraphQL API gateway in Go piece by piece to cover your entire list of REST API endpoints without disturbing your existing customers. They will still be able to fetch data from your REST API, but over time you can help them migrate to your easy-to-use GraphQL API gateway.
Ansible is a server orchestration tool that you can also use to perform workflows on remote machines in a predictable and repeatable way. In a previous post, "Automating Laravel deployment using Ansible", I lined out how you can deploy an application using your GitHub username and a user token with the Ansible Vault. However, you can also do this using SSH, making sure your server only has pull access to your application repository. This extra layer of security is quite easy to set up, so in this post we're going to look at how to do it.
In this blog post, we'll go over the following steps to use the same configuration as before, but with SSH instead of user tokens or passwords: generating an SSH key on your server, adding the public key as a deploy key to your GitHub repository, and pointing your Ansible configuration at the repository's SSH address.
You can use the configuration from the previous blog post to deploy your application, the only difference in this post is that you won't need the Ansible Vault, so you can remove the "vars_files" key from the configurations mentioned in that post. Along with that, you'll need to use the SSH address as the "github_repo_url" value: git@github.com:your-username/your-repository.git.
Generating an SSH key on your server is a quick process and involves a single command:
ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa
Let's break this down: "-t rsa" sets the key type to RSA, "-b 4096" sets the key length to 4096 bits, and "-f ~/.ssh/id_rsa" specifies the file the private key is written to.
When you're specifying a filename, make sure the file doesn't already exist. If it does, the existing key will be overwritten, which could break other SSH connections you might have. In that case, choose a different name: ~/.ssh/your_repository_name, for example.
If you do end up using a filename that differs from "id_rsa", you'll need to make an additional change by adding an entry to the ~/.ssh/config file on your server:
Host github_server
Hostname github.com
IdentityFile ~/.ssh/your_repository_name
IdentitiesOnly yes
and change the SSH address for your repository in Ansible to git@github_server:your-username/your-repository.git. Notice how we're not using github.com anymore and are now using our custom configuration: github_server.
Now that we've generated a private and public SSH key on our server, we can add this as a "Deploy key" to our GitHub repository. Deploy keys are SSH keys that give the other machine access to a single GitHub repository. You, as the repository owner, can even specify if the remote machine has push privileges. By default, the Deploy keys only have pull access, which is exactly what we want for deployments. We don't want push privileges and we don't want to give the remote machine unlimited access to our entire GitHub account.
To get your public SSH key from your server, run this command:
cat ~/.ssh/id_rsa.pub
If you used a custom name for your SSH key, use that instead. This could be:
cat ~/.ssh/your_repository_name.pub
Notice the .pub extension behind the SSH key: this is your public key. You can give this out to others, but make sure to NEVER give out your private key (the file without the .pub at the end). The contents of the private key file need to remain a secret at all times.
You should now see your public key in the terminal, starting with "ssh-rsa". Copy the entire key, including ssh-rsa and the machine name at the end. This is all part of your public key.
Now, go to your Repository on GitHub and navigate to the "Settings" tab -> Deploy keys -> Add deploy key.
Give your Deploy key a recognizable title, like "Production server", and paste the public SSH key in the "Key" field. Don't check the "Allow write access" checkbox unless you really need to. Now click "Add key" and you should see your newly created Deploy key in your overview.
Now that you've connected your server to your GitHub repository, you can make some changes to your application and commit your changes. When you're ready to deploy your changes, execute your Ansible Playbook, and see your application being deployed using your new SSH setup. You can verify if your SSH key was used to pull your changes by refreshing the "Deploy keys" overview in GitHub. Your deploy key should now be green instead of gray, and it should have a message saying "Last used within the last week".
To execute your Ansible Playbook, you can use this command:
ansible-playbook your-configuration-file.yml
Deploying your applications from GitHub using SSH doesn't have to be difficult and you don't have to give your remote machine access to your entire GitHub account either. In this post, we went over using SSH through Deploy keys in GitHub to only give your remote machine pull access to a single repository to deploy your application safely and easily.
I've written this blog post to share my recent findings of deploying applications using Ansible. I could have missed a few things here and there, as I'm new to this myself. New findings will always be addressed in new blog posts and inaccuracies will be fixed in this post to make sure I'm not spreading misinformation. So if you've found a mistake, please let me know and help me to spread quality information to fellow software engineers.
]]>If you've been working on the performance of your websites for a while and haven't tried service workers yet, keep reading. A service worker is a script that runs in your browser and helps to optimize asset loading on your website, even allowing assets to be cached in the browser for offline usage. This is not an in-depth tutorial about the ins and outs of service workers, but rather an insight into the benefit a service worker brings to your website's Lighthouse performance score.
For this post, I'm using Lighthouse, because this checks the performance at this moment in time, rather than the performance over a longer period of time. The following screenshots have been taken on the same day. The first set of screenshots represent the website without a registered service worker and the second set of screenshots were taken when I registered the service worker. It's the same website, the only difference is the service worker.
An insight wouldn't mean anything if we don't have a before and after situation. In the following two screenshots you see the before screenshots for the mobile and desktop scores.
As you can see, the score for desktop was quite good already and didn't need a lot of improvement. However, if we look at the score for mobile, the situation is different.
The score for the mobile version wasn't great and really needed some improvement, especially since most traffic (80%+) to this website is on mobile devices.
As the website I've run this test on is built with Laravel, I use Laravel Mix for compiling Sass and other assets. Laravel Mix has a plugin to generate a service worker: laravel-mix-workbox. With this extension, you can very easily generate a service worker for the compiled assets.
This is an excerpt of the configuration I use to generate the service worker:
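The excerpt itself is missing here. A minimal sketch of what a webpack.mix.js using laravel-mix-workbox could look like — the Sass paths are illustrative, and the exact placement of the publicPath override may differ in your setup:

```javascript
// webpack.mix.js — sketch, assuming laravel-mix and laravel-mix-workbox are installed
const mix = require('laravel-mix');
require('laravel-mix-workbox');

mix.sass('resources/sass/app.scss', 'public/css')
    // generateSW() is provided by laravel-mix-workbox and emits sw.js
    .generateSW()
    // Without this override, cached asset URLs get an extra leading slash
    // ("//css/style.css"), which throws errors and prevents the service
    // worker from launching.
    .webpackConfig({
        output: {
            publicPath: ''
        }
    });
```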
The most important thing to note here is that you need to include the "webpackConfig" section. If you don't, the service worker will attempt to cache your assets with an extra leading slash: "//css/style.css". This throws errors, and the service worker won't launch if there are any errors. So by adding "webpackConfig" with the new publicPath, you solve this issue.
You can use this same configuration if you're using Webpack to bundle and compile your assets. Simply replace "generateSW" with "new GenerateSW" and include it in the plugins section of your webpack.config.js file.
Now that you have the sw.js file, you need to include it in your webpage:
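The snippet is missing here. Registering a generated sw.js is typically done with a small inline script, assuming sw.js is served from the root of your public directory:

```html
<script>
    // Only register if the browser supports service workers
    if ('serviceWorker' in navigator) {
        window.addEventListener('load', function () {
            navigator.serviceWorker.register('/sw.js');
        });
    }
</script>
```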
Now that we have the service worker installed on your webpage, we can check our Lighthouse performance score once again. These are the screenshots for the Lighthouse performance scores on mobile and desktop after including the service worker.
As you can see, the score is higher than it was. It's a nice boost, but the desktop version never needed the extra performance to begin with. Mobile, on the other hand, has made a massive jump:
The mobile score is now high enough to be green, which was my goal when I started this. The Service worker has caused the score to jump quite a bit and load the static assets much more efficiently.
To achieve this score, having a poorly optimized website and only adding a service worker isn't enough. Before adding the service worker, I'd already carried out a variety of different optimizations:
These factors all played a role in the final score, but it doesn't take away that the service worker still provides a nice performance boost.
Adding a service worker to your website can massively improve the Lighthouse performance score and your UX, and can even result in a better SEO score. So if you're able to do this for your projects and you're looking to get some extra performance out of your website, including a service worker is one of the quickest performance boosts you can get.
]]>As you might have read in earlier posts, my blog is built using Laravel and my own CMS, Aloia CMS. This CMS is as flexible as I'd like it to be, and I can make changes by extending its behavior in PHP. I migrated everything to Aloia CMS last year, as it made creating content very easy and lowered the barrier to writing content instead of having to work around an existing system. Aloia CMS allows me to shape my own workflow rather than shape my workflow to fit a certain system.
As the CMS conforms to my own workflow, I was able to add all kinds of hidden automation to the system over time. The first one was sharing my content with other platforms through an Atom feed. The RSS feed followed quite quickly, as not every platform was capable of parsing an Atom feed. I could effortlessly syndicate my content to several platforms, including MailChimp, Pinterest, and Dev.to. Unfortunately, there are still a few platforms I use to share my content that don't support RSS or Atom feeds. One of these platforms is LinkedIn. As I'd like to focus on the content rather than the process of syndicating it, I set out to automate the process of sharing my blog posts on LinkedIn.
Creating content is something I enjoy a lot, but having to share this with all the platforms can become tedious over time. To overcome this burden, I wanted to automate syndicating this content. As LinkedIn doesn't support RSS or Atom feeds to publish the posts, there was another way out: API endpoints.
LinkedIn has API endpoints that you can use to publish articles, text posts, and images from any application. As I've built automation and API connections many times before, this was not much of a challenge. LinkedIn uses the standard OAuth 2.0 authentication method: Redirecting the user to LinkedIn to allow your application access to their information, receiving an authorization code, and then requesting an access token to interact with LinkedIn as the user that allowed your application access.
As the documentation is a bit of a mess at times, I'm going to list the exact pages I've used to get this to work. This is not a tutorial, so I won't do a step-by-step process in this post, but I will nudge you in the right direction.
Of course, you'll need to handle any errors during this process and you should set up some system to refresh the access token before it expires.
The LinkedIn API seems to contain a few technical choices that aren't standard practice these days, which means it requires a few things you wouldn't expect. These quirks include several unexpected headers:
The POST request to share your content to LinkedIn is easy enough to copy/paste from the examples about Creating a Share on LinkedIn.
Now that the API connection is set up, it's just a matter of sharing the blog posts to LinkedIn. There are many ways to do this, like triggering jobs or events to set your automation in motion. However, as sharing the blog posts instantly is not very important to me, I set up a cron job instead. My blog publishes scheduled posts every day at noon if one is scheduled for that day. If I didn't schedule any posts, I can still manually publish a post. The cron job looks at the publishing dates of all of my posts, figures out which one was published that day, and automatically shares it through the API to LinkedIn at 18:00 (6 PM). This way I don't have to think about publishing anything to any platform manually anymore.
Automation takes care of all of the repeatable actions I usually do manually and frees me up to focus on writing content instead. This goes hand in hand with my philosophy about using my own CMS, focus on creating content, not on everything around it. Any obstacle that you might face while creating content is a potential deterrent to stop creating content.
Sharing content to many platforms automatically helps you get the word out about your expertise, and it frees up the time you'd normally have spent manually sharing your content to those platforms. A lot of platforms support RSS or Atom feeds to automatically publish your content, but not all of them. LinkedIn, for example, doesn't support syndicating content through RSS feeds, but it does have API endpoints that make it possible to automate this. In this post, I went over the steps I took to set up publishing my blog posts to LinkedIn automatically through API endpoints and cron jobs.
I automate everything that's repeatable and might be an obstacle in the content creation process. Roadblocks have the potential to stop content creation, and that's why I go out of my way to remove them and make the process as smooth as possible.
If you're looking to do this for yourself or your business, you can always contact me and set up a plan of action.
]]>We all know that we should use properly sized images instead of using full-size images and making them smaller with HTML or CSS. Full-size images are larger, sometimes megabytes instead of a few kilobytes. When you're loading a page, this makes load times much longer, because all that data has to be served to the client. Using properly sized images, you only serve what you need. This could reduce a giant 5-megabyte image down to just a few kilobytes, and your page loads much faster, especially on mobile devices.
Only serving smaller images is half the battle though; there is still more you can do. Most modern browsers now support WebP, a modern image format that is much smaller than PNGs and even JPGs. What if you could automatically resize your images to the proper size and serve them in the smallest possible format to your clients? Well, there is a solution for that: Gumlet.
Using Gumlet to serve your images is easy. When creating an account, you can add a new source. A source in this case is a website. If you're hosting your own images, all you need to do to create a source is:
What this does is proxy the request you make to Gumlet to your own webserver. In the next step, we'll go over how this works. If you're not hosting your own images, because you're using S3 for example, you can select another source in the "Source type" dropdown and complete the steps from there.
In the previous step, you set up your image serving subdomain with Gumlet. In this step, I'm going to show you an example of how Gumlet serves images for your website. Imagine you have the following URL for an image:
<img src="https://my-domain.com/images/banner.jpg">
This image is 1200px by 800px.
To take advantage of the compression and resizing, we first need to determine what the ideal size of the image should be. As an example, say the image should be 300px by 200px. To tell Gumlet we want the image compressed to that size, we can update the URL of the image to this:
<img src="https://my-domain.gumlet.io/images/banner.jpg?w=300&h=200">
This will request the image from the Gumlet CDN. Gumlet, in turn, will fetch the image from the "Base URL" you set earlier, using the path you specified. This means Gumlet will request the image from your specified location, resize it, and compress it. Most likely you'll save more than 60% on file size, and all of your images will now be served in the WebP format.
This is the result of using Gumlet for a week on Plant care for Beginners:
Now that you've set up Gumlet and updated the image sources, you'll see much faster page loads and properly sized images on your website. This relatively simple improvement could give your SEO a serious boost, especially if your website has a lot of images and has been slow because of them.
]]>Running tasks in Ansible can be done in different ways and this can be very confusing for those starting out with automation and server orchestration. In this post, I'll explain the difference and why you should use one or the other for certain situations. If I had this post when I started with Ansible it would've saved me hours of researching, so hopefully this helps you.
Tasks are... well, tasks. They are specific to a workflow, called a playbook, in Ansible. If you read my post from last week, Automating Laravel deployment using Ansible, you'll have seen the configuration I shared at the bottom of that post. That configuration used tasks. These tasks belong to that particular playbook and can't be shared with other playbooks; for that, you should use roles.
An advantage of using tasks rather than roles or handlers is that you have the details of the tasks in the same file as the entire playbook. You can quickly see what your entire playbook will do when you execute it. This is great for smaller playbooks, like the playbook I shared, but gets tough to understand when the playbook gets longer. This is where roles might offer a way out.
Roles are a collection of tasks that are grouped under a common name. If we use the configuration that I shared last week, we can convert that into a playbook with roles, rather than tasks. This would look like the configuration below.
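The configuration block is missing here. A minimal sketch of a playbook that uses a role instead of inline tasks could look like this — the host group name is illustrative:

```yaml
# playbook.yml — sketch; "webservers" stands in for your own inventory group
- hosts: webservers
  roles:
    - deploy_laravel_app
```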
This configuration has the role "deploy_laravel_app". To understand what's happening here, I need to give you the folder structure:
├── deploy_laravel_app
│   ├── handlers
│   │   └── main.yml
│   └── tasks
│       └── main.yml
└── playbook.yml
Here you can see the "playbook.yml" we're using above and a folder called "deploy_laravel_app". The folder name determines the name of the role in the playbook. The role contains two folders: handlers and tasks. We'll focus on handlers in the next section; for now, let's look at the tasks folder. This folder contains a main.yml, the default filename Ansible looks for when trying to find tasks for a specific role.
The main.yml contains the following configuration:
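The configuration didn't survive here. A sketch of what a tasks/main.yml like the one described could contain, assuming a Git pull task that notifies two handlers — the repository variable, destination path, and handler names are illustrative:

```yaml
# deploy_laravel_app/tasks/main.yml — sketch; paths and handler names are illustrative
- name: Pull changes from Git
  git:
    repo: "{{ github_repo_url }}"
    dest: /var/www/html
  notify:
    - Run database migrations
    - Cache configuration
```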
Here you can see two new things that we haven't seen in the configurations yet. The "when" attribute from the tasks in the previous blog post is missing, and instead we have the "notify" attribute. These two attributes do the same thing in the sense that they both run tasks, but only if the status of the task is "changed" instead of "OK". In other words, the tasks in "notify" are only executed when the task makes a change to the state of the application. In this case, only if we pull new changes from Git will those tasks be performed. The difference between "when" and "notify", however, is this: "when" is registered on a task, which means the order of execution doesn't change. The tasks that are executed through the "notify" attribute are handlers, and handlers are executed after all other tasks have been performed.
The order of execution then looks like this:
So if you have multiple roles that each call different handlers, all roles will perform their tasks first and then all the handlers that need to be executed.
The advantage of using roles rather than tasks is that the playbook stays small, and you're also creating reusable processes that can be added to multiple playbooks. The use of variables is very important in this case. The disadvantage is that you can't see at a glance what the playbook is actually running and in which order the different tasks are executed. You have to look through multiple directories to figure out what runs at which point in time.
Handlers are tasks too, but they're executed at the very end of the playbook. If you were to compare this to a JavaScript execution cycle, you could say that handlers are additional tasks appended to the task list, not executed in between two other tasks. In the previous section, I showed you the folder structure we're using. Now let's see what's inside the main.yml in the handlers folder.
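The configuration is missing here. A sketch of a handlers/main.yml, where the task names match the "notify" entries used by the role's tasks — the commands and paths are illustrative:

```yaml
# deploy_laravel_app/handlers/main.yml — sketch; names must match the "notify" entries
- name: Run database migrations
  command: php artisan migrate --force
  args:
    chdir: /var/www/html

- name: Cache configuration
  command: php artisan config:cache
  args:
    chdir: /var/www/html
```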
This looks like the tasks from the configuration in Automating Laravel deployment using Ansible. The only difference is that the names are identical to the names used in the "notify" section of the task in the role. These names are treated as unique identifiers within the role, and Ansible uses them to figure out which handlers to run.
The advantage of handlers is that you can very easily "schedule" a cleanup command, for example. That way it doesn't get in the way of executing the main tasks, but it's also not something you're going to forget.
The biggest disadvantage for me personally is that you're not able to give the handlers a descriptive name like you can for the roles and tasks.
There are several ways to perform tasks with Ansible: Tasks, Roles, and Handlers. They all have a different use case and they each have their advantages and disadvantages.
I hope this post helped you to understand the difference between the ways you can perform actions in an Ansible playbook. It took me hours to figure out what the difference was and how each of them worked, so I hope this cleared that up.
]]>If you, like me, have been deploying changes manually to any of your websites consistently for months, if not years, you know that this is a repetitive task. Usually, you pull your changes from your version control system (VCS), run a few tasks to install production dependencies and/or compile them, cache your configuration, and reload some kind of service. It's usually the same few steps with a few optional steps, in case you need to run database migrations for example.
You know that if something is repetitive, you can automate it. This is where Ansible comes in. Originally, Ansible is a tool to help with server orchestration and to repeat tasks reliably on any number of servers. The best part of Ansible is that you don't need to install anything on your remote machines. The only requirement is that you're able to connect to your remote machine through SSH. If you can do that, you can use Ansible.
You could compare Ansible to a large bash script that runs commands on the remote machine through SSH. The main difference between these two is that Ansible makes everything much easier and has built-in modules for abstracting many tasks. Pulling changes from GitHub, specifying only a repository and a destination folder is one of these modules. It makes writing tasks much quicker and easier.
I mentioned that Ansible was originally built for server orchestration. As Ansible is essentially an easy-to-manage bash file, you can make it do anything you want. This includes using it for deploying your websites, be it on 1 or 1,000 servers. As long as you can use SSH on all of those servers, you can deploy on all of them.
As most of my websites are built using Laravel, I'll provide a simple configuration to deploy your Laravel website to your server, migrate your database, cache your configuration, and clear your views cache. This is very basic, but it's a starting point. This is not a tutorial, because frankly, I've just started out with using Ansible.
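The configuration itself is missing here. A sketch of the kind of playbook described — deploy, migrate the database, cache the configuration, and clear the views cache — assuming a Laravel app at /var/www/html and the GitHub credentials from secrets.yml; all host names and paths are illustrative:

```yaml
# playbook.yml — sketch; hosts and paths are illustrative
- hosts: webservers
  vars_files:
    - secrets.yml
  vars:
    app_path: /var/www/html
  tasks:
    - name: Pull changes from GitHub
      git:
        repo: "https://{{ github_user }}:{{ github_token }}@github.com/your-username/your-repository.git"
        dest: "{{ app_path }}"
      register: repo

    # The following tasks only run when the git task reports "changed"
    - name: Run database migrations
      command: php artisan migrate --force
      args:
        chdir: "{{ app_path }}"
      when: repo.changed

    - name: Cache configuration
      command: php artisan config:cache
      args:
        chdir: "{{ app_path }}"
      when: repo.changed

    - name: Clear compiled views
      command: php artisan view:clear
      args:
        chdir: "{{ app_path }}"
      when: repo.changed
```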
Then you'll need to create the secrets.yml file; you can use Ansible Vault:
ansible-vault create secrets.yml
Then fill it with these pieces of information:
github_user: your_username
github_token: your_github_access_token
To edit this file, you can use the following command
ansible-vault edit secrets.yml
This is still something I'm learning, so that's why this is not a full-on tutorial. But by putting this out there, I've already learned a lot of new things about using Ansible.
There will definitely be more content about Ansible, because I'm already loving how easy it is and how many modules are built-in. When I know more about how it works in-depth, I will write a tutorial for it.
]]>Two weeks ago I published a post about my own implementation of technical SEO for a project of mine, called: Technical SEO: How to add structured data to your website. This was the header image of that post:
That was an actual screenshot of the numbers in Google Search Console for one of my projects over the 3 months prior. Since implementing structured data on that website 3 weeks ago, I've noticed that I was getting a lot more clicks. This is what it looks like right now; keep in mind, this is a difference of 3 weeks:
That by itself is a 73.28% increase in clicks from Google. Now the interesting part: adding structured data, in particular FAQ items, shows a huge increase in clicks for that same time frame, from 0 to 1.8K in 2-3 weeks:
Those are the numbers for "Search appearance: FAQ rich results". In that same timespan, since initially implementing FAQ structured data on the website, I got an additional 1.8K clicks. Helping Google understand your website better and providing answers to FAQs directly in the search results works. At least, in my case, it works. Here's a screenshot of one of those results, including FAQs, in Google.
This is not to say that this will have the same impact on your websites, because your site might have a lot of clicks already and a 73% increase is just not possible. It could also be that you have a lot of competition for certain keywords, which will slow down the number of extra clicks you're getting. This website is located in a niche, but that doesn't take away from the huge impact that adding structured data, in particular FAQ items, had on the SEO results.
]]>Command Line Interface (CLI) applications can automate your work in many ways. They can be used to build your applications, deploy code, run processes, and do all kinds of other miscellaneous tasks. Developers often favor CLI tools because they don't require a user interface, often have consistent behavior in different environments, and are much easier to configure and distribute. The Go community has become much bigger in the past few years and because of this, there are several CLI tools that have migrated from bash scripts to Go binaries. There is no better time to learn how to write a CLI application yourself.
In this post, we're going over how to build a simple CLI tool in Go. You might be wondering: why should I use Go for this? These are the reasons for me: the code is expressive, it can be compiled to a binary which is easy to distribute, and it's very fast. This post is not an in-depth tutorial on how to write Go applications, because the implementation of these scripts is up to you, the developer. That's why I'm going to stick with a simple "Hello, World!" application and add some complexity, so you can start building CLI tools in Go for your own projects.
This post assumes you have already installed Go on your system.
The basic "Hello, World!" application is a nice way to see some of the syntaxes in Go in action. To get started writing a simple CLI application in Go, let's begin with the "Hello, World!" application:
What does this code mean? First, we define this file as the entry point to our CLI application by specifying "package main". You can give the file itself any name you'd like. I usually stick with "main.go" to be clear about which file is the entry point of the application.
As this file is the entry point to the application we have to define a function that will be executed when you run it. You do this by specifying a "main" function. The body of the entry function is printing the famous "Hello, World!" to the terminal. So when we run the command:
go run main.go
You will see "Hello, World!" in the terminal. Now that we can print something to the terminal, let's add some complexity and configuration options by working with flags.
Flags are used to configure an application and are available in a lot of different programming languages, including Go. For this example, I want to customize the message that's displayed in the terminal by passing some data to the application through flags. Let's see how we can customize the behavior of our CLI application based on some input from the user:
You can see that we define two flags for our application. One of the flags is called "message" and the other is called "number". Again, these names are up to you and your needs. We define the type of each flag (flag.Int, flag.String), enter the name of the flag, the default value, and a helpful message about what the flag means.
Now that we've added the help messages, we can run a help command:
go run main.go --help
This will return the possible flags you can use on this command:
Usage of /tmp/go-build456763591/b001/exe/main:
-message string
The message you'd like to print to the terminal (default "Hello, World!")
-number int
The number you'd like to add to your message (default 1)
exit status 2
This is helpful in case you ever need to run your binary but don't have the source code to look at. It's also great for distributing your application because you can tell the user which options are available. At the bottom of the main function, you can see that I'm passing the message variable like so: *message. This is because the message variable doesn't actually have a value, but it's a pointer to a place in memory. By adding the asterisk in front of it, you retrieve the value from memory and you can print it to the terminal like a normal string. The other variable, number, is passed to the Println method like "strconv.Itoa(*number)" because the value of *number is an int and not a string. Since Go is a strictly typed language, you'll need to convert it to a string before you can do any string concatenation.
Now we can run the application like before, without the flags, and see the new text show up:
go run main.go
Shows: "This is the message you want to display: Hello, World! with number 1"
As you can see, the flags still have the default value. Now let's try adding custom values:
go run main.go --message "Hello, Internet!" --number 42
Shows: "This is the message you want to display: Hello, Internet! with number 42"
As the new message has a space, you'll need to use double quotes to treat the string as a single value. As you can see, the sentence printed to the terminal now contains the values you passed to the command.
Now that we know how to pass values to our CLI application, it's good to make it actually do something for us. We'll write a simple script that reads contents from one file and writes them to another, customizing the source and target file through flags. The package we'll use for this is ioutil. This is a simple application, but using these techniques you're able to write complex automations and build your applications in such a way that they do exactly what you need them to.
Let's look at the code for this scenario:
Like before, we define two flags. One represents the source file and the other the target file. In my case, I've created a source.json file and added that filename as the default value for the application. This is the content of source.json:
{
"message": "This is a message from the source file"
}
After parsing the flags, we read the contents of the source file by using ioutil.ReadFile(*sourceFile). This returns the data in bytes and also an error if something went wrong. If there was an error we display an error message in the terminal to notify the consumer of the application that something went wrong while reading the source file. Perhaps you didn't have a source file. If that's the case, the application lets you know by showing this message:
go run main.go --source src.json
Shows: "Found an error while reading the source file: open src.json: no such file or directory"
After displaying the message we make sure to exit the application because we don't have all the information we need to continue. By returning exit code 1, we make sure the terminal knows something went wrong. Now that we know we have the contents of the file in memory, we can write it to the target file by using "ioutil.WriteFile()", passing the target filename, the file contents, and the proper file permissions.
Again, we check if something went wrong and notify the consumer if that was the case. If everything went correctly you get the following output:
go run main.go --source source.json --target target.json
Shows: "Copied the contents of source.json to target.json"
You should now have a new file called target.json with the same contents as the source.json file. This is a very simple example, but you can see how you can capture user input and use it to perform some kind of action. The number of different applications you can build with something as simple as these input flags is limited only by your imagination.
Writing CLI applications doesn't have to be difficult. Whether you write them as shell scripts or in Node.js, PHP, or Go, they offer developers a very wide range of possibilities. CLI applications can make your life a lot easier by automating all kinds of tasks in many different environments. This post focused on writing CLI applications in Go because the language is expressive, you can compile it to a single binary, and these automations execute very fast.
Now that we've covered the basics of building a CLI application in Go, the types of applications and configuration options you can offer are endless. For example, you could build an application that performs certain tasks while offering the user the option to skip some of them. This is all up to you and your needs. I hope this post showed you some new things you can use to start writing your own CLI applications in Go.
If you'd like to talk more about this you can contact me on Twitter.
Structured data is a way to normalize your data, and Google uses it to better understand what your website and specific pages are about. By providing this kind of information on your website, you help Google, and in turn, Google can use it to improve the appearance of your website in the search results. If you've ever seen FAQs, company details, or news articles in your search results, you've seen what structured data can do for a website.
In this post, we'll go over a few common structured data types that you can add to your website today. Structured data might sound complicated, but all it is is a JSON object in your HTML content. These are the structured data types we'll go over in this post:

- Article
- Breadcrumbs
- FAQ
- Logo
- Sitelinks search box
These are types that you can add to a blog, but you can pick and choose which ones apply to your situation. You can find the full list of available data types on Google Developers. It may be worth checking out if your needs differ from what I'm describing in this post.
Structured data is a script with a type; it's not content that will be displayed on the page. This means you can place the code snippet almost anywhere in your HTML file, though it's easier to maintain if everything is grouped together at the bottom of the page. I'm using blocks and components for this website, so I always place the related structured data inside of that component. This could mean that the structured data for the breadcrumbs is all the way at the top of your page while the rest of your code snippets are at the bottom.
An article can be an actual article, but also a blog post. First, let's show the structured data for the previous blog post that I posted on this website. This will give you a good look at what this structured data looks like:
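A reconstruction of what such an Article snippet could look like, placed inside a `<script type="application/ld+json">` tag (the URLs, headline, and exact timestamps here are placeholders; the date matches the "22 Apr, 2020" result mentioned below):

```json
{
  "@context": "https://schema.org",
  "@type": "Article",
  "mainEntityOfPage": {
    "@type": "WebPage",
    "@id": "https://roelofjanelsinga.com/articles/example-post"
  },
  "headline": "Title of the blog post",
  "datePublished": "2020-04-22T08:00:00+00:00",
  "dateModified": "2020-04-22T08:00:00+00:00",
  "author": {
    "@type": "Person",
    "name": "Roelof Jan Elsinga"
  },
  "publisher": {
    "@type": "Organization",
    "name": "Roelof Jan Elsinga",
    "logo": {
      "@type": "ImageObject",
      "url": "https://roelofjanelsinga.com/images/logo.png"
    }
  },
  "image": ["https://roelofjanelsinga.com/images/header.png"],
  "description": "A short description, same as the description meta tag"
}
```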
As you can see, this code snippet contains a few attributes to identify what kind of structured data we're dealing with: @type and mainEntityOfPage. The JSON object also includes a @context to tell anyone parsing this data that this is a schema of some sort. Then you find the dateModified and datePublished attributes. These dates will be used to display the publish and sometimes the modification date in the search results in Google. Notice how the screenshot below shows "22 Apr, 2020":
Google was able to add this to the search result because I made that information available through structured data in the HTML.
Then we have the "headline" attribute. In most cases, this is just the title of your article or blog post. The author attribute is a special attribute because this is a nested data type. You can recognize nested data types by the @type attribute. In this case, we make sure to specify that an author is a Person with the name "Roelof Jan Elsinga". This is not displayed in the screenshot above, but it does help Google to make better sense of the content on the page.
The publisher attribute, like the author attribute, is a nested data type, and it itself contains another nested data type (logo). You can add your own name or your company's name as the publisher's name, and the URL in the logo should point to your logo or that of your company.
The last two attributes are about the article/blog post itself. The image is an array of the images you want to feature for this particular post. In my case, this is always the header image, but you can add multiple images. The description is optional, but I use the same value as the description meta tag.
Breadcrumbs are a great way to show your visitors where they are on your website. Besides having visual breadcrumbs on your pages, you can add this as structured data as well. This will help Google to improve the breadcrumbs in the search results. Have a look at the screenshots below and notice the difference in the breadcrumbs.
The first one doesn't have any structured data set up, so Google breaks the URL apart and uses that as the title for the section of the website.
The second screenshot does have structured data with a proper title for the section of the website. This helps to improve the awareness of the location of the page within the website. Now let's see what this looks like as structured data.
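A sketch of what such a BreadcrumbList could look like (the example.com URLs and the third item are placeholders; the "Plant guides" entry at position 2 comes from the example discussed below):

```json
{
  "@context": "https://schema.org",
  "@type": "BreadcrumbList",
  "itemListElement": [
    {
      "@type": "ListItem",
      "position": 1,
      "name": "Home",
      "item": "https://example.com/"
    },
    {
      "@type": "ListItem",
      "position": 2,
      "name": "Plant guides",
      "item": "https://example.com/plant-guides"
    },
    {
      "@type": "ListItem",
      "position": 3,
      "name": "Current page title",
      "item": "https://example.com/plant-guides/current-page"
    }
  ]
}
```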
Like before, we mark this JSON object as a schema and define it as being of type "BreadcrumbList". The next attribute, "itemListElement", contains all the depths within your website needed to get to the current page. This snippet is from the page behind the second screenshot; you can look at the page source of that page and you'll find this snippet in its HTML. The "itemListElement" attribute is an array of list items, and each list item is a nested data type.
The attributes of that "ListItem" are position, name, and item. Position means the depth of the page on your website, starting at 1. The homepage is always position 1. In this case, "Plant guides" is at position 2. Looking at the second screenshot, you can see this as the second item in the breadcrumbs as well. The item attribute in the structured data is the URL belonging to the current depth level you're looking at.
If you write tutorials or blog posts about topics you're an expert in, it's often a great idea to include an FAQ on the page so your visitors can get answers to questions they might have. These FAQs should be visible to your visitors as well as included as structured data. If you only include the FAQs as structured data without a visual representation, Google might penalize you and your SEO benefits will be gone. For the FAQ, we'll look at the page we went over in the Breadcrumb section. Below you'll find a screenshot of the FAQ on the page as a visual representation.
This is a visual representation of the FAQ that users can interact with. Now that you have this available on the page, you can add it as structured data as well. This looks like the following snippet.
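A sketch of what such an FAQPage snippet could look like (the question texts here are hypothetical examples; the answer texts are placeholders, matching the redacted answers described below):

```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "How often should I water this plant?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "The answer you give on the page itself"
      }
    },
    {
      "@type": "Question",
      "name": "How much sunlight does this plant need?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "The answer you give on the page itself"
      }
    }
  ]
}
```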
I've removed the last two questions to not make the code snippet too long and I've redacted the remaining two answers because the actual content is not important for this post. To create structured data for your page, set the @type of the JSON object to FAQPage. The mainEntity is an array of questions and answers. The questions and answers are both nested data types of Question and Answer. The question you're displaying in your FAQ section can be added to the "name" attribute of the question and you can add the given answer to the "text" attribute of the nested answer data type.
Google uses the FAQs to compile a list of questions and answers in the search results. When you're the one that answered the question in your content and/or structured data, your answer and website get featured like in the screenshot below.
This is a great way to drive more traffic to your website through Google searches.
If you have a business website, the chances of you having a logo in one way or another are fairly high. This logo is something you should have on the menu at the top but is also something you can add to your website using structured data. Along with the logo, you can add other business details like the website and the name of the business. Let's see what this looks like as a code snippet.
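A sketch of what this Organization snippet could look like (the logo path is a placeholder):

```json
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "url": "https://roelofjanelsinga.com",
  "logo": "https://roelofjanelsinga.com/images/logo.png",
  "name": "Roelof Jan Elsinga"
}
```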
To define a logo, Google uses the "Organization" data type and only needs the "logo" and "url" attributes. The other attribute, "name", is defined on the Organization data type but is not required by Google, so you can add it, but it's optional. This structured data is short and simple, which means you can add it to almost all of your websites. There are a few guidelines you should keep in mind when it comes to submitting a logo; you can find these in the type definition on Google Developers.
Have you ever seen a search result with an embedded search box and wanted that for your website as well? That's another thing you can use structured data for. The most important requirement to get this working properly is a search page that takes a search term in the URL. An example from my own blog is:
https://roelofjanelsinga.com/articles?q=search+terms+here
This doesn't have to be a query parameter, it could also be part of the URL:
https://example.com/search/search+terms+here
As long as you can add the search query in the URL in some way, this will be possible for your website as well. Let's look at what this code snippet looks like for my website.
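A sketch of this WebSite snippet, using the search URL shown above (the exact values are an illustrative reconstruction):

```json
{
  "@context": "https://schema.org",
  "@type": "WebSite",
  "url": "https://roelofjanelsinga.com/",
  "potentialAction": [
    {
      "@type": "SearchAction",
      "target": "https://roelofjanelsinga.com/articles?q={search_term_string}",
      "query-input": "required name=search_term_string"
    }
  ]
}
```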
In this code snippet, we're defining a website with potential actions to take on it. In this case, a potential action on my website is to search for blog posts. First of all, we need to define this data as being of type "WebSite". We then define the URL of the website and add an array called "potentialAction". Each item in the "potentialAction" array is a nested data type; in this case, it's a "SearchAction".
A search action needs the URL of your search page and a placeholder for the search terms people can enter in the search results on Google. This placeholder is called "{search_term_string}". The placeholder can have a different name if you prefer; its name is defined in the "query-input" attribute. So if you want to use "{placeholder_search_term}", you need to define it in the "query-input" attribute like so: "required name=placeholder_search_term".
There are many more structured data types that might be interesting for you to use on your website. You could add products, events, fact checks, and even recipes with ingredients. You can have a look at the full list of structured data types to pick and choose which ones are relevant for your website.
Google makes a lot of assumptions when displaying your website in the search results, but you can help it improve the presentation. You can add structured data to your HTML pages to define things such as article information, breadcrumbs, FAQs, a logo, and even which actions visitors can take on your website. This helps Google understand what your website and individual pages are all about and how your website is structured. Over time, this could have a positive effect on your SEO rankings, and all you had to do was help Google understand your website better.
If you have any questions about this post, you can reach out to me on Twitter and I'll do my best to guide you through implementing structured data on your website.
So you want to host your static website on GitHub Pages? Excellent choice! In this tutorial, I'm taking you through the steps to host your static website on GitHub Pages and to deploy your own changes. After this tutorial, you will be able to automatically deploy your own website to the internet. Before we get into the steps, I'll outline a few options and limitations you have when using GitHub Pages for your website hosting.
There are a few options for hosting your website on GitHub Pages, including:

- Using Jekyll, which GitHub builds and deploys for you
- Using the master branch as both the source and the production website
- Using the master branch for your source files and a separate "gh-pages" branch for the production files
In this post, I will not be going over how to use Jekyll in combination with GitHub pages. Jekyll is supported by GitHub, which means GitHub will do everything needed to deploy a Jekyll website for you. In this tutorial, we'll focus on deploying a plain HTML / CSS / JavaScript website.
The second option is to use the master branch both as the source of your website and the production website itself. What this means is that all of your source files, including build scripts, package.json file, and other "source" files like SCSS are accessible through the internet. For example, you'd be able to see the contents of your SCSS files by going to https://yourwebsite.com/scss/_header.scss. This is the easiest solution for hosting your website on GitHub Pages, but it's not the cleanest option.
This is where the third option comes in: using your master branch for all your source files and having a separate "gh-pages" branch that holds the production files for your website. This is a slightly more involved solution but is much cleaner than the second option. This is what this tutorial will focus on.
Hosting on GitHub Pages is great, but there are some limitations and gotchas. These include:

- Only static files can be served; there is no server-side scripting
- All of your production files need to be committed to Git
GitHub Pages is essentially a service that gives you a folder on a server. A web server like Nginx is serving the files as-is. There is no scripting layer and you can't execute PHP files for example. If you want to have a dynamic website, you can still do this, but you'll need to do it through JavaScript. JavaScript files are static files, so the server will be able to serve those as they are. When the files have been served, it's possible to retrieve data from any external service you have at your disposal.
Another thing that seems a little backward, if you've worked with CI/CD systems before, is that all of your production files will need to be in Git. Normally you only have your source files in Git, because the CI/CD pipelines build the production files for you. With GitHub Pages, you could accomplish this by using GitHub Actions, but we'll keep this tutorial simple and not go over that.
Enough talk, let's get our hands dirty and set up our GitHub Pages environment with all the automation you need to quickly deploy new changes to your website.
GitHub Pages works with private repositories, but for this tutorial, we'll focus on using a public repository for your website. The only difference is that you need a Pro membership to use GitHub Pages with private repositories. If you already have one, you can skip to step 2 and follow along from there.
If this will be a new project for you and you still need to create a repository for your website, make sure to select the public option as displayed below:
If you already have a repository, but it's a private repository, you can make it public by clicking "Settings" and scrolling down to the danger zone:
Click on "Make public" and you're ready for the next step.
This step is completely up to you. The aim is to create some kind of content that can be displayed in the browser. You can build an entire static website or just create something simple and move on to the next step. For the sake of simplicity, I will only add an H1 to my project:
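For reference, a minimal index.html along these lines could look like this (the heading text is just a placeholder):

```html
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <title>My GitHub Pages website</title>
  </head>
  <body>
    <h1>Hello, GitHub Pages!</h1>
  </body>
</html>
```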
Before we enable GitHub Pages on this repository, you have to make a choice about which option you want to use for hosting your website on GitHub Pages. For the sake of this tutorial, I will use the third option: Using "master" as the source branch and "gh-pages" as the production version of the website.
To create the "gh-pages" branch, click on "Branch: master" and type gh-pages like below:
Then press "Create branch: gh-pages from master". You will now have two branches in your repository: master and gh-pages. To enable GitHub Pages on your repository, click on "Settings" and scroll down to the "GitHub Pages" section. You should see something like this:
As you can see, GitHub has already enabled this repository for GitHub Pages, because it detected the "gh-pages" branch. If you chose to stick with the master branch, you will have to enable GitHub pages here and select the master branch as the source.
Now when we visit the URL displayed in the blue header, you will see your website:
Your website is now located at https://your-username.github.io/repository-name. But oftentimes you want your website to be at a domain you already own. To do this, first, you'll need to decide which type of URL you want to use for this. You can choose a main (apex) domain or a subdomain for this.
If you want to use the main domain (roelofjanelsinga.com) then you'll need to create a few new DNS records in the service where you manage your domains. This is what my DNS records look like:
All you need to do here is create 4 A-records for your main domain and point them to the IP addresses below:
185.199.108.153
185.199.109.153
185.199.110.153
185.199.111.153
You can set the TTL to whatever you like, in my case I set them to 1 hour.
Using a subdomain is much easier than using the main domain as your custom domain. You will still need to change your DNS records, but this time you'll only need to do this:
You create a CNAME-record for the subdomain you want to use, for example, amazing.yourwebsite.com. Set the TTL to whatever you want, I use 1 hour for this. Then the content of your CNAME should be your-github-username.github.io, in my case roelofjan-elsinga.github.io.
Now that you've set up your DNS records, you can submit your custom domain to GitHub. In the "Custom domain" field you can now enter whichever main domain or subdomain you chose and press "Save". A new file called "CNAME" will now appear in your repository. This will contain the domain you chose.
When using a custom domain, you might have to re-enable "Enforce HTTPS" for your website. It might take a while before this option becomes available because GitHub will verify your DNS settings. It might still give you warnings that your DNS settings are incorrect, but you can ignore these warnings if you followed the previous steps. This will automatically go away as soon as GitHub sees your updated DNS records. After a little while, you will be able to see your website appear on the domain you've chosen.
If you've already set up build scripts and your workflow is to your liking, this could be the last step you need to follow in this tutorial. Step 5 and 6 are about automating your build process and your deployments. These are nice to have, but they're not necessary to use GitHub Pages in any way.
If using GitHub Pages makes your deployment process more difficult, there is really no reason you should use it. This is why I'm including some scripts you can use to automate your build and deployment processes. I'm making the following assumptions for these steps, as I can't cover every scenario out there:
I'm more than happy to help you out if your situation is different than the assumptions I'm making here, just contact me on Twitter and I'll help you out with this step.
To make everything automatic, I'm going to include an NPM package called Husky. Husky is a tool that lets you define Git hooks right inside your package.json file. Git hooks are essentially events that are triggered when you interact with Git. For example, when you commit a change, the pre-commit and post-commit events are emitted. Husky allows us to listen for those events and perform certain tasks. For this tutorial we only need pre-commit, so let's set that up. First, let's install Husky:
npm install --save-dev husky
# OR
yarn add -D husky
Now, in our package.json file, include the following configuration:
{
// Scripts, dependencies, etc.
"husky": {
"hooks": {
"pre-commit": "npm run prod"
}
}
}
Now, every time we commit a change in Git, "npm run prod" will be executed. You can replace this with your own build script, for example: "gulp build" or "webpack". In the case of my simplistic project, this could mean that we copy the index.html file to dist/index.html.
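For a plain HTML site like this, a hypothetical stand-in for such a "prod" build script could be as small as the following (the placeholder index.html is created here only to keep the sketch self-contained):

```shell
# Placeholder index.html so this sketch is self-contained
echo '<h1>GitHub Pages demo</h1>' > index.html
# The actual build step: copy the static files into the dist/ folder
# that will be deployed to the gh-pages branch
mkdir -p dist
cp index.html dist/index.html
```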
One last thing that we need to do is add new scripts to your package.json:
{
// dependencies, etc.
"scripts": {
"prod": "echo \"Add your build command here\"",
"postinstall": "node ./node_modules/husky/lib/installer/bin install",
"deploy": "git push origin `git subtree split --prefix dist master`:gh-pages --force"
}
// husky, etc.
}
The "postinstall" script executes after you run "npm install". This makes sure that husky runs properly. I recommend you run "npm run postinstall" to be certain that husky has done what it needs to do. The deploy command will be needed in the next step, so just copy and paste it for now.
Now our code will be built every time we commit a change, without having to think about it. Just how we like it.
We've already automated the build process, so now we can automate the deployment process. For this, we're going to make use of GitHub Actions. This sounds very intimidating, but I'll explain exactly what's going on and I'll give you the configuration you need for this to work.
First of all, let's click "Actions" in your repository:
Then click on "Set up a workflow yourself" on the right side. In the new screen, you can write your configuration, but you can remove everything that's there and paste this instead:
# This script deploys your website automatically
name: CI
# Only trigger this script when you push to the master branch
on:
push:
branches: [ master ]
# Specify the tasks to run when this script gets triggered
jobs:
build:
runs-on: ubuntu-latest
# Download our master branch
steps:
- uses: actions/checkout@v2
# Run our deployment command
- name: Deploying the static website to gh-pages
run: npm run deploy
This configuration takes care of deploying your website automatically when you push changes to the master branch. All the way at the bottom you can see that we execute "npm run deploy". This is the script that we added to our package.json in step 5. When you run "npm run deploy", the following command will get executed:
git push origin `git subtree split --prefix dist master`:gh-pages --force
Let's analyze what this actually does. This command pushes a single folder, in this case "dist", to the gh-pages branch. That is the branch that we chose as our production branch in GitHub. This command will get executed every time we push to the master branch. Now every time you push your changes, GitHub Actions will automatically publish your "dist" folder. So that's another thing you don't have to think about anymore when deploying to GitHub Pages.
I hope this helps you to start deploying your own static websites to GitHub Pages and feel more empowered to update your own content, without needing the help of complicated deployment systems or a development team. Deployment of static websites really doesn't need to be complicated or take any extra work. Through automation scripts, you can deploy changes by going through your normal workflow, just like you've been doing.
If you want to see an example of a website that's currently running in production in the way that I've described in this tutorial, you can have a look at sandervolbeda/personal-website on GitHub.
Most developers use some kind of version control system (VCS). One of the most well-known is Git, and one of the most well-known services for hosting Git repositories is GitHub. Hosting websites has been a pain point for many developers for many years, but it doesn't have to be that way. You don't have to deal with SSH, FTP, or some other way of interacting with a server just to host a static content website. There is a solution when you use GitHub to host your Git repository: GitHub Pages. I've written about GitHub Pages before, in How to host a lightning-fast website on GitHub Pages. This post is not about how to host your website on GitHub Pages, but why you should consider doing so.
GitHub Pages is a service provided by GitHub to host your static website straight from your repository on the GitHub servers. This can be a static website in many different shapes:

- A plain HTML, CSS, and JavaScript website
- A website built with a static site generator, such as Jekyll
As long as your website can be displayed by opening HTML files, your website can be considered a static website and you can host it on GitHub Pages.
There are many benefits to hosting your website on GitHub Pages. Some of these are:

- Very simple deployments: push to a branch and your changes are live
- Your local environment matches the production environment
- Less technical team members can publish changes too
For this post I'm going to focus on the deployment aspect of these benefits. The ease with which anyone can now deploy changes to their website is really great, so I consider that the most important benefit.
Deploying changes becomes very simple because all you have to do as a developer or designer is push your website to a specific branch in your Git repository. When setting up GitHub Pages for your repository, following the steps I outlined in How to set up and automatically deploy your website to GitHub Pages, you will have specified the branch from which you want to host your website. Oftentimes this is "master" or "gh-pages". Personally, I prefer to use "master": anything on "master" is what I consider to be published already, so putting that thought into practice and using the master branch as "live" on GitHub Pages is the natural approach. If your project is more than just a website, you can use "gh-pages" as your deployment branch instead.
By using the master branch of your repository as your published website, it means that any time you push your code to the repository it will be published nearly instantly. This has the benefit that you can now start to practice continuous deployment. You don't have to do anything manually after pushing your code, because GitHub takes care of this. This leaves you free to do other things, like writing more code. This also alleviates the pain of having to use SSH or FTP to publish your changes. Anyone with access to the repository is now able to contribute to your project. This includes people that otherwise may not have the technical skills to publish changes. This empowers you, your contributors, and/or your team. In this regard, GitHub Pages helps people feel like they can help out.
Because you can only host static websites on GitHub Pages, your local environment behaves the same as GitHub Pages: what you see locally is what will show up on your website. This also means that fixing bugs is simple, because anything that's broken on your website will also be broken in your local environment.
It comes down to this: your development cycles can become shorter. The time between writing code on your machine to it showing up on your website can become much shorter. Developers no longer have to worry about publishing code and content writers are now empowered to publish the changes they need to, without relying on the development team.
This post wouldn't be complete without some examples of different types of static websites hosted on GitHub Pages. We'll look at two different examples: one created using Jekyll, a static site generator, and the other built using static assets, such as HTML, CSS, and JS files. These websites could be good examples to follow when hosting your own websites on GitHub Pages.
Aloia CMS is a project using Jekyll, a static site generator. You can view the website and source code on GitHub.
Sander Volbeda is a great example of a portfolio website using just HTML, CSS, and JS files to create a static website. You can also view the source code for that project on GitHub.
Updating both of these websites is as simple as pushing your changes to the master branch. GitHub takes care of the rest.
If you're looking for a simpler way to host your static websites, you should give GitHub Pages a try. It simplifies your development cycles and allows technical, as well as less technical, team members to contribute to the project directly.
Raspberry Pis are getting faster and can do more things in your house than ever before. If you've ever set up a service on your Raspberry Pi, you know that one of the most important things you need for everything to work is the IP address of your Raspberry Pi. If you don't use a static IP address, the IP can change after every reboot of the credit card-sized computer. This could make your services unreachable and force you to go out of your way to update the new IP address everywhere you set it before. Luckily, there is a very easy way to avoid this situation: a static IP address.
Setting a static IP on a Raspberry Pi has a lot of benefits and is actually quite easy. In this post, I'll take you through 3 steps to get this working on your credit card-sized computer. Before we get to those steps, I'll explain what a static IP address actually is and why there are several ways of achieving the same result.
A static IP address means that your devices will have the same IP address on your LAN at all times, even after rebooting the computer. This has the benefit that you always know which services live at which IP address and it allows you to build complex systems using all kinds of devices.
I mentioned that there are multiple ways to set a static IP address. One of them is to assign an IP address inside of your router for your device. This is usually the best way to do this because it avoids any IP conflicts. The router will be the one to assign the IP addresses and there won't be any duplicates.
There is another way, and that's the way we'll go through in this post: assigning a static IP address inside of the device. This means that the device will ask the router to assign it to the requested IP address. So in simple terms, instead of the router telling the device: "Hey, you're 192.168.1.10" the device asks the router "Can I be 192.168.1.10, please?". This could cause IP conflicts if you have a lot of different devices that all need to be managed through the router. But I've personally only seen this in large office buildings and not in my own home.
Now that you know what we're going to do, let's actually do it! First, start your Raspberry Pi by plugging it in and open a terminal. You can do this either on the device itself or over SSH, but I recommend doing it on the device itself.
Open the configuration:
sudo nano /etc/dhcpcd.conf
Now go all the way to the bottom of the file and add these lines:
interface wlan0
static ip_address=192.168.1.10/24
static routers=192.168.1.1
static domain_name_servers=192.168.1.1
Let's go through this line by line:

- interface wlan0: the network interface to configure; wlan0 is the built-in Wi-Fi, while a wired connection would use eth0
- static ip_address: the IP address you want the Raspberry Pi to request, followed by the subnet size (/24)
- static routers: the IP address of your router (the default gateway)
- static domain_name_servers: the DNS server to use, which is often your router as well
When you've added/updated those values, you can save the file and run the last command to make these changes take effect:
sudo systemctl restart dhcpcd
This will restart the network service and request your static IP. If you don't have any network connectivity within 10-20 seconds you might have run into an IP conflict and you'll have to repeat the previous steps, selecting a different static IP address.
If you do this through SSH, you will be logged out after you run the last command, because the device will temporarily be disconnected from the network. After 10-20 seconds you can log in using the static IP you've selected.
Setting a static IP address on your Raspberry Pi is a simple three-step process with countless benefits. One of them is that it makes your services easier to find within your network: you'll find any existing service at the same IP address, even after restarting your Raspberry Pi.
If you have any questions or suggestions on how to do this more easily, you can find me on Twitter.
Prioritizing is difficult, especially when you have many different things you could do. Choosing the next feature to work on for your web application feels like prioritizing tasks: they all seem important, and there is no way you can do them all at once. In this post, I'll go over a few simple things you can do to choose the next feature you should work on.
When building an application, there are always at least five things you want to implement at the same time. As a challenge, try not to build any of them and see which one starts to hurt you first. By hurting you, I mean this: it takes you more time to do something because the feature isn't there. Features are all about letting you do something quicker.
As a very rough example you can think of this: To create a blog post, you don't need a form, you don't even need a page. All you need is a database client. You can enter your blog post straight into the database and never need any pages to manage this. This is fine as long as it's a quick task. As soon as the lack of a form starts to cost you valuable time, you build the form and the ability to manage your blog posts more efficiently. This is a ridiculous scenario, but you can transform this into your own situation.
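This idea can be sketched in a few lines. Here I use sqlite3 with a made-up posts table; the schema and database client are illustrative, not part of any real project:

```shell
# A made-up posts table managed entirely through a database client,
# no form or page required (sqlite3 and the schema are illustrative).
db=$(mktemp)
sqlite3 "$db" 'CREATE TABLE posts (title TEXT, body TEXT);'
sqlite3 "$db" "INSERT INTO posts VALUES ('My first post', 'Written straight into the database.');"
sqlite3 "$db" 'SELECT title FROM posts;'
```

The moment these one-liners start eating real time is the moment the form earns its place.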
Software developers are lazy, and that's exactly what you should be. Being lazy means that you'll find the quickest and easiest way to solve a problem. This same idea goes for picking your next feature. There is no point in spending hours on something that 90% of your visitors won't see. Instead, find something that can be done quickly, takes very little effort but will help a large majority of your visitors.
As an example, let's go with something like designing your administration dashboard. This is the dashboard from the previous example, where you can write your blog posts. You're likely the only one that will ever see that dashboard, so you can make it exactly like you want to. However, since you're the only one that uses it and knows exactly how it works, there is no point in making this very pretty. Sure, it might look great and you can show it off to others, but your workflow might not improve. Instead, add something that helps you to spread your blog posts, like an RSS feed. This can help you to effortlessly cross-post your content to other websites, without manual actions from you.
"Jobs to be done" is an amazing way of making your application resonate with your target audience. Instead of adding features that you think might benefit your visitors, really ask yourself: "What is the visitor trying to do?" In the case of a blog post, they're probably trying to find that one line in your post that gives them what they need. In this case, add a full-text search for all your content. Let them find that line as quickly as possible. For a blog, it's obviously better if those people stay on your website longer and read the blog post, but they should do this because they enjoy your content, not because you're tricking them into staying on your website.
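Even a naive sketch gets you surprisingly far here. For example, grep over a folder of post files (the file names and contents below are made up for illustration):

```shell
# Minimal full-text search over post files with grep; -r recurses,
# -i ignores case, -l prints only the matching file names.
mkdir -p posts
printf 'Setting a static IP on a Raspberry Pi\n' > posts/static-ip.md
printf 'Moving the search to Elasticsearch\n' > posts/search.md
grep -ril 'elasticsearch' posts/
```

This prints posts/search.md. It's a baseline, not a search engine, but it answers "which post has that one line" instantly.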
It's tough to choose which feature you should work on next, but these three steps will make it easier for you: wait until the lack of a feature actually starts to hurt, be lazy and pick the quick win that helps the most visitors, and focus on the job your visitors are trying to get done.
By doing these three things, you will spend less time on features that might not actually be worth the effort you're putting into them, and more on the features that will benefit you and your visitors. I've used these guidelines for my own projects and so far they have worked well. They have made my projects leaner, because fewer unused features make it into the applications, and the features that are there work well enough not to cause any pain. It might be worth checking out for your current or upcoming projects.
Aloia CMS is a content management system I'm actively developing, so I see this post as a milestone in my development process. Before I explain how Aloia CMS has made me more productive, let's get into what I see as "being productive". I love writing blog posts; that's why you see one here every single week. What I don't like about most content management systems I've used before is that you need a certain workflow to be able to work with them. I love writing, but not the effort it takes to start writing a simple post. I want to avoid hurdles and just write.
Avoiding hurdles at all costs is one of the reasons I've developed Aloia CMS. In the beginning, it was just a headless CMS with markdown/JSON files as a "database". This was great because writing in Markdown is something I enjoy doing. But, when I didn't have a laptop and wanted to write a blog post, I wasn't able to. I needed a laptop to write my markdown files, publish them to GitHub and publish the changes on my server. Hurdles like this will make me lose my motivation to write something. As I mentioned, I don't want to go through hurdles, so I created a dashboard that's accessible on my phone. This allowed me to write and publish on my phone. Problem solved! Right?
As often happens, the wishes and requirements of websites change. So did mine. From a simple blog with some recent work, I wanted a website that I could easily extend with extra content types, new pages, and custom content. The old version of Aloia CMS (version 0.x) was not flexible and took a lot of effort to set something like this up. The available content types were baked into the CMS because this was completely fine prior to these required changes. I needed a way to make this flexible, so I looked at how Laravel solves this.
Laravel makes use of a "Model". If you want different content types, you can simply extend that model and add custom behavior to it. This was exactly the kind of flexibility I needed from a CMS, so that's what I built for Aloia CMS. This feature was released in version 1.0.0 in February of 2020 and served its purpose well. The CMS became leaner because of this.
I hate hurdles and I didn't want to ruin anyone else's day by publishing a breaking change to the CMS without defining a very clear upgrade path. I provided a simple command that migrates all content that was managed by the CMS to the new format. Running this command took a few milliseconds. That's a lot quicker than manually migrating any content that was previously supported. Providing this helper helped me to migrate my entire website from version 0.x to 1.x.
With the release of version 1.0.0, I deprecated most of the old code but kept it in place. This old code had one purpose left: migrate the old content to the new format. These deprecated scripts would make it more inviting to upgrade from 0.x to 1.x because theoretically, you didn't "have to" migrate your content. The old code still worked. If you want a new version with some new shiny features, but don't want to migrate, you're still able to use the system. As with all deprecations, I removed all legacy code in the next major version (2.0.0).
In the old version of Aloia CMS, the file structure was all over the place. It needed multiple files to manage content and metadata. When I started to post on dev.to and created a documentation website for Aloia CMS, I was introduced to the concept of front matter. This was a huge revelation because this allowed me to keep the content and metadata in a structured manner in a single file. For me, this was the way forward. Starting with Aloia CMS 1.0.0, front matter was the way to embed metadata into your content files. All content types have some metadata, which meant that I could put this functionality into the base Model. Any model that extends the base model can now easily save metadata and content to a file without having to worry about the underlying code. The CMS was once again working for me, not the other way around.
Throughout this entire process I've gone through iterations of "How can I annoy myself less". If I find something that seems weird or looks like a hurdle, it's something that most likely gets changed in the next version. Since Aloia CMS at the core is still a headless CMS, it has no specific workflow. By consciously separating the dashboard and underlying CMS, I was able to create a dashboard that does exactly what I want it to. If you have different needs for a dashboard, you can very easily build something yourself and interact with the content that way. Workflows are just a highly individual thing and really shouldn't be something forced upon you by the creators. The creators should give you the core functionality and allow you to shape your own workflow.
If you have a Laravel project that you'd like to add basic CMS functionalities to, you should have a look at Aloia CMS. Even though I'm highly biased because I built it, it's actually a really nice way to add a content management system to your Laravel application without needing a database or any other external dependencies.
If you've ever used a GitHub integration, then you'll know you can verify your Git commits. In this post, I'll go over the steps you need to take to accomplish this for your own development system. After you've completed these steps, your commits will have a "Verified" flag in GitHub.
Before we get into it, it's probably a good idea to explain what the verified tag means. It means that when you commit code, the commit is signed with a key: your GPG key. This key contains information about you, like your name and e-mail address. When you submit your public key to GitHub, GitHub can verify that the signed commit was created by your account. All it means is that anyone with access to the repository can see that the commit was made on your system by someone who knows the passphrase that unlocks your private key. Ideally, this can only be you. It's a way to verify that you were the one creating the commit and no one else.
Now let's get into the steps you need to take to get this GPG key and start getting the "Verified" flag. For this post, I'm focusing on how to do this on a Linux distribution, because that's what I use on a daily basis. You can find out how to do this for your preferred platform on the GitHub help pages.
You can find out if you already have a GPG key by running the following command:
gpg --list-secret-keys --keyid-format LONG
If you have no keys available, or you want to create a new one, go to step 2. Otherwise you can skip ahead to step 3.
When generating a new GPG key, you'll need to fill out some personal information. This information will be used by GitHub to verify that it was you who made the commit. To generate a new key, run the following command:
gpg --full-generate-key
It will prompt you for some options. When it asks for which type of key, select the default choice (RSA and RSA). Then it will ask you for the key size, fill in 4096. Then it will ask you when you want this key to expire. I went for 0 (never expire), but you can choose another one if you need to.
Then it will ask for your name and e-mail address, along with a comment. The comment can be used to identify the key. For example, fill in your company name if you're on your company computer. When filling out your e-mail address, make sure it's the same e-mail address you used to sign up for GitHub.
When you go to the next step, you need to fill out a passphrase (or password). Choose one that you can't guess easily. Preferably use a random password generator with 16 or more characters. Be sure to save this passphrase somewhere, because you will need to fill it in when you commit your changes.
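As a side note, if you'd rather not click through the interactive prompts, gpg also supports unattended generation from a parameter file. Below is a sketch of such a file; the name and e-mail are placeholders, and without a Passphrase line gpg will still prompt you for one:

```
Key-Type: RSA
Key-Length: 4096
Subkey-Type: RSA
Subkey-Length: 4096
Name-Real: Your Name
Name-Email: you@example.com
Expire-Date: 0
%commit
```

Save this as params.txt and run "gpg --batch --gen-key params.txt" to generate the key without prompts.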
At this point, you either already had a GPG key or you've just created a new one. Let's verify that your system can see your key by running the command from step 1 once again:
gpg --list-secret-keys --keyid-format LONG
If you see your key, you're ready to submit it to GitHub. When running that command you should see a section that starts with "sec" like below:
sec rsa4096/gpgIdentifier 2020-03-18 [SC]
Copy the "gpgIdentifier" part of that line (no rsa4096/ attached), because this represents the identifier for your GPG key. I replaced my key in that last line to make clear what you're looking for. We're going to use that identifier to find out what your public GPG key is. This public key is what we'll submit to GitHub. Using the identifier, run the command below:
gpg --armor --export gpgIdentifier
You should now see your public key, starting with "-----BEGIN PGP PUBLIC KEY BLOCK-----" and ending with "-----END PGP PUBLIC KEY BLOCK-----". Copy this entire key, including the lines I mentioned in the last sentence.
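If you'd rather script the copying of the identifier from step 3, here is a sketch that pulls it out of a sample "sec" line (the key ID below is fabricated for illustration):

```shell
# Extract the key ID that follows "rsa4096/" on the sec line
# (the line below is a fabricated example of gpg's output format).
line='sec   rsa4096/3AA5C34371567BD2 2020-03-18 [SC]'
key_id=$(printf '%s\n' "$line" | sed -n 's|.*rsa4096/\([0-9A-F]*\).*|\1|p')
echo "$key_id"
```

This avoids copy-paste mistakes when you feed the identifier into the export and git config commands.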
Now that you have this key, go to GitHub → Settings → SSH and GPG keys. Scroll to the bottom until you get to the GPG keys section and press "New GPG key". In the next form, paste your GPG key and save.
You now have generated a GPG key and submitted the public key of this to GitHub. The last thing left to do is tell Git to use your key to sign your commits. You have two options: Do this for a specific repository or do this for all your repositories. Remember the gpgIdentifier from the last step? We need that one last time, so be sure to copy that again.
First, we need to tell Git that we want to sign commits. To only enable signing commits for the current repository, run this command:
git config commit.gpgsign true
To enable signing for all repositories in your system, run:
git config --global commit.gpgsign true
Now that Git knows we want to sign commits, we need to specify which GPG key to use for this. Again, you can do this for specific repositories or for all repositories on your system. To tell Git to use the GPG key we just created for the current repository, run:
git config user.signingkey gpgIdentifier
If you want to do this for all your repositories, run this command:
git config --global user.signingkey gpgIdentifier
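To double-check that Git picked up these settings, you can read the values back. The sketch below does this in a throwaway repository with a made-up key ID:

```shell
# Set and read back the signing configuration in a scratch repository
# (the key ID is a made-up placeholder).
repo=$(mktemp -d)
cd "$repo"
git init -q
git config commit.gpgsign true
git config user.signingkey 3AA5C34371567BD2
git config commit.gpgsign
git config user.signingkey
```

Running "git config" with just a key name prints its current value, so you can confirm both settings before making your first signed commit.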
Git now knows that we want to sign commits and which GPG key to use to do this. Now when you commit code, you will need to enter your passphrase, but in order to do this properly, you need to tell your GPG agent how you want to input this passphrase. A simple trick is to add an environment variable to your ~/.bash_profile or ~/.profile by running these two commands:
test -r ~/.bash_profile && echo 'export GPG_TTY=$(tty)' >> ~/.bash_profile
echo 'export GPG_TTY=$(tty)' >> ~/.profile
All that's left to do is write code and commit your changes. When you commit, your system will ask you for a passphrase. This is the passphrase you filled in when generating the GPG key, so fill that in. I'm using Ubuntu, and when it prompts for my passphrase, it gives me the option to save the passphrase so I won't have to fill it in every time I create a commit. Whether you do this is up to you: it's obviously safer to fill in your passphrase every time, but it also takes longer.
When you push your changes to GitHub, you will now see that beautiful "Verified" flag on your commits.
Recently, I've replaced the search functionality on cro-tool.com, moving from a database-driven search engine to Elasticsearch. This wasn't an easy process and there were several hurdles to overcome, but the final result was worth all the effort it took to put this together.
Let's start at the beginning. Before this transition, I had to specify which fields I wanted to search in, how the results should be ordered on the search page, and how the fields should be matched (LIKE %%, etc.). This meant a lot of manual work went into displaying search results correctly. Joins and where clauses were used to find records, but what happens when none of the records matched the SQL query? Well, there were no results. SQL databases don't really understand relevancy in records, so it took manual work to accomplish something similar to what's already built into dedicated search engines.
The last problem I highlighted in the previous paragraph, displaying relevant search results, was something I wanted to solve because I'd rather show less relevant search results than no search results at all. A search engine is great at enabling you to do this. To find a good option for a search engine, I looked at my own experience. I work with Apache Solr on a daily basis and I love using it. But I also know it can be heavy on the resources. So I went with Elasticsearch instead. I knew the server resources were very limited, so going with an actual search engine was already a gamble. I thought that Elasticsearch was lower on resource usage, but this thought is based on thin air and I haven't actually checked if this is accurate.
I knew Laravel has an official package, called laravel/scout, which has a few implementations that enable you to use Elasticsearch as the search engine instead of Algolia. This was another reason to use Elasticsearch, because I'd be able to integrate it quickly into the existing Laravel application without rewriting a lot of the existing logic.
I knew I was going to use Elasticsearch and I knew I was going to use laravel/scout, so I searched for a composer package to connect these two and found matchish/laravel-scout-elasticsearch. It seems to be fully featured and even allows you to customize the request before it is sent to the search engine, perfect!
After installing the package and publishing the configuration files I went to work and set up the code needed to be able to index documents into Elasticsearch.
If you know me, you know I'm a huge fan of Docker and docker-compose. So naturally I set up the docker-compose.yml file to launch an Elasticsearch server for me, this is the configuration I used:
version: "2.3"
services:
  elasticsearch:
    image: elasticsearch:7.6.1
    container_name: elasticsearch
    environment:
      - discovery.type=single-node
      - "ES_JAVA_OPTS=-Xms256m -Xmx256m"
    ports:
      - "127.0.0.1:9200:9200"
    volumes:
      - ./storage/elasticsearch:/usr/share/elasticsearch/data
As you can see, I tell the Java runtime to only use 256 MB of RAM. This is a very small amount, but the data set is not very large and I'm dealing with a server with low resources. The docker-compose file also tells the Elasticsearch container to open up port 9200, but only on localhost. I don't want to open this server up to the internet, so only applications on the same machine can access it directly. I've also included a volume, so all data in Elasticsearch will be written to the storage/elasticsearch folder in the Laravel application. This folder obviously contains a .gitignore file, because I don't want this data in my Git repository.
When the search functionality was converted from database queries to API requests to Elasticsearch, it was time to deploy everything. Everything went well...until I tried to run the Elasticsearch docker container. The server didn't have enough RAM to run the Java Runtime and the container refused to launch. This was the moment I had to get creative with my solutions.
I had two options: upgrade the server or migrate the current Apache installation to Nginx. I chose the latter for the following reason: Apache uses more RAM, even when no one is visiting the website. It spawns workers and keeps them open until new visitors are there to use them. Nginx, on the other hand, is able to spawn workers as they're needed. This means that if there is no one on the website, it's not spawning any workers and that means it runs much lighter on idle. When there are more visitors, Apache keeps spawning workers for new visitors and this adds up over time. Nginx uses these workers much more efficiently and is able to serve these visitors with fewer resources. This is why I thought that switching to Nginx might give me enough free resources to be able to launch Elasticsearch.
After the migration, I was indeed able to launch Elasticsearch, as the RAM usage was several times lower using Nginx than Apache. Even when multiple people were visiting the website, the RAM usage stayed under control. This is partly because I specified the maximum amount of memory the Java Runtime is allowed to use for Elasticsearch.
Running Elasticsearch on a server with low resources is possible, but it's not something that's going to be easy to accomplish at all times. For this particular situation, I had to migrate the current Apache installation to Nginx in order to launch the Elasticsearch container with Docker. The final result is great because you get the functionality of a full-blown search engine on server hardware that would otherwise not even be considered to be enough. Of course, this is not a permanent solution, because as soon as the website gets busier, the server resources will need to be upgraded. This solution does allow me to drag that process out a little bit longer though, and that's never a bad thing.
Peppermint OS is a lightweight Linux distribution; it only consumes about 330 MB of RAM at idle. This makes it a great choice for older and/or less powerful devices, but it's also amazing on high-end devices, where Peppermint OS is blazing fast. Still, that's not my favorite thing about Peppermint OS. My favorite feature is ICE, a simple SSB manager. In this post, I'll go over what ICE is and why it's absolutely amazing.
ICE is a simple SSB manager. So what does that mean? SSB stands for Single Site Browser, which essentially means that you open a browser to view one website and one website only. This seems very counterproductive, but let me explain why it's such a cool concept. With ICE, you can essentially create an application that starts a browser in a container to load the website you've specified. This means that any data you save within that container won't be shared with other containers. It's a bit like a Docker container or snap package running on your operating system, but for websites.
Before I had ever used Peppermint OS, I had to open a browser and type in the URL of a service I wanted to use. If I was lucky, there was a snap package or native application available, so I could simply search for the application I wanted. When there was a snap package, everything was fine, but I still had to install an application that took up space. On my phone, I could install websites as apps, so why wouldn't this be available on desktop systems? Well, that is pretty much what ICE does. You give it a name, fill in a URL, and tell it to load the favicon from the website you've specified. When you submit the form, you have a new application in your menu.
This allows me to very quickly access all the online services I use, like Netflix, Prime Video, Nextcloud, Notion, and many more. I never have to install an application to make these services appear, because ICE opens these websites in Firefox, which was already installed. With ICE, I can create applications for any and all services that I use and search for them inside my start menu. This makes accessing these applications very easy and quick.
I've explained what ICE is and why it's amazing, but how does this fit into my current workflow? Well, I've been using Linux, but specifically Ubuntu for a few years now and I know that I can press the Windows button on my keyboard to open the start menu. In most distros that I've used for the past few years, you can start to type and it'll automatically show you the applications that match your search terms. This is great because I never have to use the mouse to go through menus. When I can create shortcuts to online services through the browser, I don't even have to open the browser first and go to the website, but instead, I search in my menu and press enter. This has helped me to be much more efficient with launching the applications I need on a daily basis.
The fact that Peppermint OS has such low memory usage, and that I have installed it on a higher-end device, makes the whole process, from pressing the start menu button to searching for the application and seeing it on my screen, take mere seconds. Even people who haven't used Peppermint OS, or even Linux, before will understand how the start menu works and where the custom applications are.
Using ICE SSB in my current workflow has increased my productivity a lot. I don't have to spend time going through menus to launch the application I want, or manually open a browser and navigate to a frequently used website. I can do all of these tasks from my keyboard. The fact that this great piece of software comes preinstalled on Peppermint OS makes the whole OS a huge productivity hack for me. If you're interested in installing this software on a distro other than Peppermint OS, you're in luck: that's possible. I've found that it works reasonably well in Ubuntu with GNOME, but I do miss the blazing-fast performance Peppermint OS gives me at times.
Code is great and it makes your application do the things it needs to do. But what happens when you hire a new person onto the team or start to use new technologies to accomplish your goals? You need some kind of reference to explain the flows within the application code base, and code is usually not great at providing this overview. You need proper documentation to explain the code, company jargon, and application flows. In this post, I'll go over three levels of documentation that you'd have in an ideal world. I understand the ideal situation is rarely the reality, so in that case, use bits and pieces of these levels.
The levels I'll be talking about are the following: documenting your domains and jargon, documenting your application flows, and documenting your code.
When you're new to a team, you won't understand some words that the people around you use on a daily basis. This is normal. At some point, you'll need to know what they mean to be able to contribute to the business and specifically the application. In other words, you need to know what the domains in the company are. Domains are a concept from DDD (Domain-driven design). This means that you structure your code into very specific use cases of your application. The language used here is business-specific, not programming language-specific. This means that if you're talking about a specific domain, let's go with "Checking out", you can talk to anyone in the company and they'll be able to understand what you're talking about.
Okay...so what does that mean? Well, it means that you need to document all the domains that exist in your application and in your business. New hires won't exactly know how your business works in detail, so documenting the different domains and jargon will allow them to quickly get up to speed when talking to anyone in the company. By specifying exactly what each domain and jargon means, you leave no room for confusion about what the other person might refer to.
Let's go through an example:
While [domain] I encountered a bug in the system.
You can replace domain with something specific to the business, in this case, let's use "Checking out". This makes: While checking out I encountered a bug in the system.
When you don't have a clear application structure and no documentation about how a process goes through your application, you're going to spend a lot of time searching through code to find where a bug occurred. When you have a very clear application structure but no documentation, you'll at least know in which section to look for the bug. You won't know exactly where, but the scope of your search is greatly reduced. When you don't have a clear application structure but your documentation is great, you'll be able to pinpoint the right place to fix the bug. And if you have both a clear application structure and great documentation, you'll have the bug fixed in no time.
So documenting domains and jargon helps you get your new hires up to speed quickly, but it also helps you solve bugs more quickly.
When you're new to a codebase, it can be very difficult to figure out how processes flow. You can find entry points and responses quite easily, but finding out how processes work by simply looking at code is very difficult. The main processes within a system are usually easier to find if the application is structured well. If the main purposes of the system aren't immediately clear from glancing at the folder structure, this is something you should document.
Having very good documentation but a very messy system is still acceptable because at least there is a clear path through the application if you can reference a manual. If you have clear documentation and a very clear application structure, then you have a unicorn on your hands and you should do everything in your power to maintain this. You can do this by continuously writing documentation, writing tests, and refactoring code that isn't up to the standard that has been set.
So the main goal of documentation is to explain how the application works, but you're also supposed to document your code right? Correct, but that's only one of the levels you should document. So let's get to that right now.
We're finally there: the easiest, but often overlooked, part of this documentation journey. Documenting code is what most teams do, but not all teams do it as effectively as they could. I've made my own documentation mistakes plenty of times, for example using a self-documenting method name in my code but then adding a comment that essentially copies the method name.
For example:
generateTransactionObjectForPurchase
with the amazing comment:
This method generates a transaction object for a purchase
Then I just think to myself: "Great, that literally told me nothing about why this is needed". A much better comment would be something along the lines of:
We need to generate specific transactions from this purchase to be able
to submit this to a payment provider.
It still stays true to the method name, but it explains why this part of the process is necessary and where it fits into the flow. When the time comes when this gets refactored, the developer knows why this code is here and what purpose it serves. It could happen that the code is no longer necessary, in which case you can just delete it. But if the documentation doesn't explain this, you won't know if it's needed without specifically going through the code. This developer might be you a year down the line, so do yourself a huge favor and explain why code is there. It's fine if the explanation spans multiple lines because it'll only help you to debug the current situation quicker.
In this post, I've explained why documentation is very important for a project on three different levels: domains and jargon, application flows, and code.
Documentation is important to get new hires up to speed with the company jargon, the application flows, and the code, to make communication between different people within the company go more smoothly, and to be able to locate and solve bugs quicker.
Having an application structure that is very clear to understand by just looking at folder names is very difficult and takes time. If you can't take care of that (right away), having great documentation will still help new hires contribute to the application faster.
If you have anything to add to this post, I'd love to hear from you on Twitter!
Setting up your personal cloud has never been easier, and it allows you to stop relying on big cloud providers like Google and Dropbox. In this post, I will go over the following aspects of this set-up: installing Nextcloud on a Raspberry Pi, connecting an external hard drive, and mounting that drive in the file system.
You can find a Raspberry Pi image on the internet that comes with Nextcloud already set up. This is great, but it's not what I used. I use a Raspberry Pi 4 with 4 GB of RAM, which means I can run multiple services besides Nextcloud on a single machine, so I wanted the operating system to be as plain/vanilla as possible. Luckily, there are many ways to install Nextcloud, and one of them is to use snaps. This is a way to very easily install applications in their own container, which means they're generally safer than applications installed directly into your operating system. This only applies if the snap doesn't use classic mode, which this snap luckily does not.
If you want to install snaps on your Raspberry Pi running Raspbian, you need to install snap itself first. You can do this by running the following command:
sudo apt-get install snapd
Once this has finished, you can install snaps on your Raspberry pi, just as you would on any other Linux distribution. To install the Nextcloud snap, run this command:
sudo snap install nextcloud
After this finishes you have Nextcloud installed onto your Raspberry Pi. All that's left to do is run the program. You can do this through the snap command:
sudo snap run nextcloud
You now have a running Nextcloud instance. To access this instance on another computer in your network, you'll need to figure out what your local IP address is. You can find this on the Raspberry Pi by running:
ifconfig
and looking for an IP address that starts with "inet 192.168". If you type this IP address into your browser on another device in your network, you'll get the page to create an account on your Nextcloud installation.
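The ifconfig output can be noisy, so here's a rough sketch of filtering the local address out of it. The sample below is a captured, hypothetical fragment of ifconfig output so the filtering itself is clear; on the Pi you would pipe the real output in with `ifconfig | grep ...` instead.

```shell
# Hypothetical captured fragment of ifconfig output (yours will differ).
SAMPLE='inet 127.0.0.1  netmask 255.0.0.0
inet 192.168.1.25  netmask 255.255.255.0'

# Keep only the "inet 192.168.x.x" entry and print the address itself.
echo "$SAMPLE" | grep -o 'inet 192\.168\.[0-9.]*' | awk '{print $2}'
```

For the sample above this prints 192.168.1.25, which is the address you'd type into the browser on another device.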
Now that you have Nextcloud installed and running on the Raspberry Pi, it's time to plug in your external hard drive. This is the part where I recommend using a Raspberry Pi 4 instead of a Raspberry Pi 3. The Raspberry Pi 4 has two USB 3.0 ports, while the Raspberry Pi 3 only has USB 2.0 ports. USB 3.0 lets you read and write data up to 10 times faster than the older ports. You can still use a Raspberry Pi 3, but the result won't be as fast as it could be.
You can now plug in your external hard drive and run the following command on your Raspberry Pi:
lsblk
Find your hard drive in the list. This is most likely the device with "sda" as a name. Verify this by looking at the size of the disk in the same row. For me this says 7.3T, as I have an 8TB drive.
Write down the device name, as we'll need this for later.
Now that we know the name of our hard drive, we'll mount it in the file system, so we can tell Nextcloud where to save the files. First off, we're going to create a folder for the hard drive to live in. You can do this as follows:
sudo mkdir /media/harddrive1
You can name this folder differently if you want, but make sure to remember the name, because we'll need it in the following step. Now that you've created a folder where your hard drive will live, it's time to mount it into the file system. This is easier than it sounds, but you will need the terminal for this. We're going to add an entry for the hard drive to the "/etc/fstab" file, which means it will automatically be mounted when you restart your Raspberry Pi. To do this, run the following command:
sudo nano /etc/fstab
A file will be opened in the terminal. You will see two rows that start with "PARTUUID". Move your cursor down with the arrow keys and create a new line under those two lines. Next copy and paste the following snippet in there. To copy and paste, you'll most likely have to copy the text and right-click in the terminal to paste. Most terminals don't support pasting with "CTRL + V".
/dev/sda1 /media/harddrive1 auto uid=1000,gid=1000,noatime 0 0
Let me explain what happens here:
/dev/sda1 is the name of the hard drive, the sda1 part is what you saw when you ran the "lsblk" command, so be sure this is correct
/media/harddrive1 is the folder you created earlier. So if you chose a different name, be sure to enter that here
uid=1000,gid=1000 is the id of the current user and the id of the group. If you've created a different user than the default user that came with Raspbian, find out the uid by running:
id -u
and the gid by running:
id -g
and fill those in the line instead of 1000.
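Putting the pieces above together, here's a sketch of building the fstab line from your own values instead of typing it out by hand. The device and folder names are the examples from this post; substitute what `lsblk` showed you.

```shell
# Example values from this post — replace with your own device and folder.
DEVICE=/dev/sda1
MOUNTPOINT=/media/harddrive1

# Use the current user's uid and gid instead of hard-coding 1000.
FSTAB_LINE="$DEVICE $MOUNTPOINT auto uid=$(id -u),gid=$(id -g),noatime 0 0"
echo "$FSTAB_LINE"

# On the Pi you would then append it to /etc/fstab and mount without rebooting:
#   echo "$FSTAB_LINE" | sudo tee -a /etc/fstab
#   sudo mount -a
```

Running `sudo mount -a` applies the fstab entries immediately, so you can check the mount works without restarting the Pi.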
You have now mounted the hard drive in the file system. It's time to go back into Nextcloud.
You've installed Nextcloud and mounted the hard drive into the file system of your Raspberry Pi. Next, you need to go into your Nextcloud dashboard and click on your user in the top right. A menu will appear and you need to click on "Apps". On the new screen, you click on "Disabled apps" in the menu on the left side. Next, click on the "Enable" button on the "External storage support" entry. This will enable the ability to use external storage in Nextcloud, such as your external hard drive.
When the plugin is enabled, click on your user in the top right again and choose "Settings" this time. On the left side, you'll get a menu and under "Administration" you'll find "External storages". Click on that and you'll see a new page where you can add external storages. To be able to add your external hard drive, choose a name for "Folder name". This could be anything you want. I simply named it "Hard drive". In the drop-down with "Add storage", select the option "Local". The form will change and you'll be able to fill in more information. You can skip all of the fields except "Configuration". This is where you fill in the folder you created earlier, so if you've been following the examples, this should be:
/media/harddrive1
Once you've filled that in, you can press the checkmark at the end of the form and a green checkmark will appear at the beginning of your form. You can now go back to your files by pressing the Nextcloud logo in the top left. Your hard drive should now be visible in the list of files and folders.
You now have a fully functional Nextcloud installation, but it's only accessible from within your own network. In a lot of cases, this is enough and you can stop reading any further. But if you want to be able to access your Nextcloud from anywhere in the world, keep on reading.
I won't cover all of these steps in detail, as some are optional and others differ for everyone. These are the general steps for exposing your Nextcloud instance to the internet with at least some basic protection. Keep in mind that I was only able to do this part because I own a domain name I could use for it; I'm not sure how to do this if you don't have a domain name available.
The steps are:
Open port 80 and 443 on your router and point them to the machine running Nextcloud
Point a domain name to your external IP address, use a service like https://whatismyipaddress.com/ to find your external IP address.
Run these commands to enable HTTPS in Nextcloud, add your domain to the trusted domains, and restart Nextcloud:
sudo /snap/bin/nextcloud.enable-https lets-encrypt
sudo snap run nextcloud.occ config:system:set trusted_domains 1 --value=your.fancy.domain
sudo snap restart nextcloud
You can now go to the domain you used earlier and see your Nextcloud instance, complete with SSL. Your installation is now available from anywhere in the world and you have some basic security measures set up, so your connection to and from Nextcloud is secure. Any further steps are also new to me, so I won't be able to help you with those yet. Hopefully, in the future I can extend this post with extra security measures to make your data even safer than it is now.
I hope this post helped you to set up your Nextcloud instance and expose it to the internet. If you have any questions, you can contact me on Twitter and I'll do my best to help you out.
]]>About 4 years ago I wrote my very first blog post "Researching home servers". In that post I talked about my Raspberry Pi 2 and using FreeNAS to accomplish my goals of building my own file server at home and accessing it from anywhere. Well, it has finally happened, and I didn't use FreeNAS to accomplish it.
In order for a Raspberry Pi to work in this setup, I needed the latest version, the Raspberry Pi 4 4GB. This new version of the Raspberry Pi has USB 3.0 and built-in Wi-Fi. This makes it the ideal machine to run all the time, as it doesn't consume a lot of energy but is still powerful enough to deal with multiple reads and writes at the same time. I specifically looked for a way to use Wi-Fi instead of ethernet to connect the Raspberry Pi to the network. This might be a controversial choice because you should usually use a cable to get the best network speeds, but I don't like to mess around with cables and the wireless connection is just as fast as the wired connection for my devices. The fewer cables the better in this case.
The micro SD card in the Raspberry Pi is only 16GB, and that's obviously not enough to store all of the data from my machines. So I went for a future-proof hard drive that won't be full for at least a few years. The external hard drive connects to the Pi through USB 3.0 and is mounted through an entry in "/etc/fstab". This helps with the reliability of the hard drive's availability: if you mount it manually instead, it could be unmounted at any point and you wouldn't be able to store any data on it anymore. Adding the hard drive to "/etc/fstab" ensures it's available where you say it should be available, unless something goes wrong, that is.
To connect the external hard drive to the Pi and make it available in the network, I use Nextcloud. This is meant as a local Dropbox-like environment. So no FreeNAS like I was initially thinking of using. Unlike FreeNAS, where you can create a Samba share and map it as a network drive in your operating system, Nextcloud works through an app or through the browser. There you can use it just like you would Google Drive or Dropbox. I chose this solution because I wanted something that took the least effort. I had tried OpenMediaVault as well, but it disabled the network capabilities on the Raspberry Pi and I had to reinstall Raspbian.
Another reason I went for Nextcloud is the fact that you can install the software through Snap packages, which means I'm no longer bound to a specific Linux distribution. Initially, I tried to run Ubuntu Mate and Ubuntu server on the Raspberry Pi 4 but this didn't go as well as I expected. I went with Raspbian because it's lightweight and developed by the Raspberry Pi team themselves, which means it has to work with all models.
The fact that I can now run my personal cloud on a Linux machine, as opposed to FreeNAS, which is FreeBSD-based, means that I'm very comfortable tweaking and installing things. If something goes wrong, I know how to fix it. I never took this into account when I wrote the other post 4 years ago.
Exposing this set-up to the internet is something I'm not looking forward to and that's why I haven't done that yet. I want to research how to expose something from my home network to the internet a bit more first. I want to be sure that I've at least taken basic security measures to make sure my data and network are safe. So if you have any tips, besides using SSL because that's obvious, I'm very interested to hear what your solutions are.
This whole experiment was very nostalgic because I went through a lot of the things I went through 4 years ago. And the fact that this post relates to the very first blog post I've ever written was surprising. I long thought that post was a dead end, but as it turns out it was just a very long and slow journey. In the end, I ended up with a Raspberry Pi 4 4GB with an 8TB external hard drive running Nextcloud on my home network. I can now move all of my data to that disk and keep the important data as a second backup on a separate external hard drive. This all relates to the saying that if you only have your data once, you have none of it. So far I haven't exposed this set-up to the internet, because I want to make sure I'm being safe with it before exposing myself to all kinds of malicious traffic.
]]>If you've been reading my blog posts for a longer time, you might remember that I'm working on a CMS, something I never thought I would do. In this post, I will go over my reasons for building my own CMS again, because they've changed a little bit since the last time I wrote about it. This post is a little different from my other posts, as I usually pick a topic and do a deep-dive. This is more of an announcement and a reflection on the past few weeks of working on Aloia CMS.
I've published a short announcement on the Aloia CMS website where I explain that the first stable version is not far away. You can read about "The road to version 1.0" on the Aloia CMS website. This post is not about the internal changes of the CMS, because frankly, they're not all worked out yet. Over the past few weeks, I've worked on the project quite a lot because there were things that annoyed me about the internal structure and expandability of the CMS.
Over the past few weeks, I've worked on making the CMS less of a headache for myself. I claim the project is all about developer experience and relieving the headache of dealing with content when working with a CMS. If you never look at the source code, the CMS is quite nice to use, until you want to update the content. Retrieving data is simple and straightforward. This all comes from the fact that the CMS was designed to convert Markdown files to PHP classes and HTML code. That part of the project received a lot of attention because I've been building websites on top of the CMS; the quicker I can create and maintain websites the better. Recently I've been looking at adding features to some of the websites that run on the CMS and came to the conclusion that it's going to cause a lot of headaches.
The included content types (Page, Article, Content block and Meta tags) dictated everything in the CMS and were wired throughout the code. This allowed me to very quickly create websites that dealt with these content types. On the other hand, adding new content types meant I had to add more code to the CMS itself. There was no way to extend the base functionality from the CMS and create your own implementation in your projects. This was never a problem because I didn't need to do that. But now, with the planned features on some of the websites, I need a way to extend the base system and create new content types in the systems that use the CMS. This is where the first stable version comes in: get out of the way and allow the developer to extend the CMS if needed.
Another reason to go for a major version change is to be able to apply some breaking changes. Currently, you need at least two files to display an article (like this one): a config file and a markdown file with the content. This was initially done for two reasons: 1. keeping content and metadata separate and 2. I didn't know front matter existed. It wasn't until I started to work with Jekyll for the Aloia CMS website that I found out what front matter was and how powerful it is. This helped me realize that keeping content and metadata separate was a terrible idea. The metadata is about the content and only that content; if you were to see it as a database relationship, it's a 1-to-1 relationship. There really isn't a point in keeping them separate. Combining these two into a single file allowed me to create a very extensible system. All content types now have exactly the same structure: front matter and content in a single file. This means I can create a base layer that parses the front matter, keeps it in memory as attributes, and keeps the content separate.
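For illustration, here's roughly what a single file combining front matter and content looks like. The field names below are hypothetical examples, not Aloia CMS's actual schema:

```markdown
---
title: Why I'm building my own CMS
post_date: 2020-02-01
is_published: true
---

The content lives below the front matter, in the same file,
so the metadata and the content can never drift apart.
```

The block between the `---` markers is the front matter (the metadata), and everything after it is the content itself.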
The included content types are now all extending the base layer, which means adding new content types is as simple as adding a new class to your system and pointing it to a folder. From there you can interact with the content types however you want: retrieving and updating/saving is taken care of for you. The fact that everything extends the base class makes maintenance much simpler: a change in one part will be applied to everything.
In an earlier post, "Why I built my own CMS", I described why I built my own CMS and why I never expected I would do that (again). Most developers, including myself, find building a CMS tedious because it's often a lot of copy/paste work. I've now changed my mind a little bit, as most of the copy/paste work is either done or isn't needed for this project. Since my CMS doesn't come with a user interface, I don't really have to do a lot of copy/paste. When starting this project, when it wasn't a standalone CMS yet, I never intended to build a full-blown CMS, so I never had that negative mindset when developing it. Now that all content has the same format, I let the code work for me and not the other way around.
I realized when working on these recent changes and getting ready for the 1.x release, that I'm creating a product that works perfectly for my workflow. Any time I can do something to improve my workflow and get on with my day, I'm enthusiastic about making the change. That's also one of the reasons why I work on Linux 95% of the time: it does what I need it to and doesn't get in my way. The CMS is built to be painless and take away the burden of managing content, so if I can help myself and others achieve that goal I'm happy to do it.
I've been working on making my CMS work for the developer, rather than the other way around. I've made a lot of changes to allow developers to shape the CMS around their workflow and to make it easy to extend and interact with. My primary goal for the CMS is to make managing content a breeze and to just get out of the way. The content is the most important part of the CMS, not the bells and whistles. If you want a feature-rich content management system, there are plenty of them out there and I'm happy to point you to some of them. Aloia CMS is for you if you want a lean content management system that integrates into your Laravel project, manages your content, can be extended with your own features, and just doesn't bother you.
]]>
Earlier this week (January 27th, 2020) I published my first public Golang package. I don't know if it's any good and if it's even structured as it should be. Nonetheless, the package has already helped me work through a lot of challenges. So why did I publish it and what are my intentions with it? Let's dive in!
One of the most important aspects of Golang is the use of packages. If you're trying to do anything in your program, you will need to import packages to provide the tools you need. Since packages are so central to Golang, I thought it was necessary to find out how to create them myself. This way I can learn how to distribute packages, and how to extract parts of my source code and use them in many projects.
I've been developing PHP and JavaScript applications for close to five years at this point. During this time I've distributed and contributed to about a dozen PHP packages and one JavaScript library. There was quite a tough learning curve for both, because there are all kinds of things you need to think about when developing these packages. Here I can only comment on PHP packages, because I'm still very confused about developing and distributing JavaScript libraries. When it comes to PHP, I needed to configure all kinds of settings in Composer and it was very intimidating. Of course, once you get the idea it's very simple and it's exactly like building a web application. Developing PHP packages locally is still a mystery to me: I'm going through hoops, creating symlinks in folders, to be able to see my changes on the screen. When the packages don't have a web interface, I only write unit tests to verify they work. In summary, you go through a lot of hoops to develop a simple package.
So what is this process like in Golang? Two words: A breeze. Developing a package in Golang is by far the easiest process I've gone through with any language and framework. What a relief! When you're developing your application and want to structure your code into folders, you're already forced to create a new package. Every folder is its own package. These packages expose a certain set of functions and the rest is all contained within the folder. This means you can check that specific folder into Git and push it to GitHub. That's it. You can use this package locally as if it's in the same project as your other code. This means you don't have to create symlinks or go through any other hoops. When you're done with your package, push your changes to GitHub. Now you can import your package in any project you're working on. Let's use my first package as an example:
GitHub URL: https://github.com/roelofjan-elsinga/dates
With this URL, you can run the following command in your Golang workspace:
go get github.com/roelofjan-elsinga/dates
and import it into your project like so:
import "github.com/roelofjan-elsinga/dates"
and that's it, you're done. I was so surprised to find out this was all it took to publish my sub-folder as its own package.
Three weeks ago I started my deep-dive into Golang because I was trying to solve a business goal. If you missed it, you can read more about "The impact of migrating from PHP to Golang". When I wrote this application, I created quite a few packages within the application, as this was a large process. Earlier this week, on the day I published this Go package, I started a second application to solve another business goal. In this second application, I faced some of the same problems I had in the first application. But the difference was that this time I had already solved the problem...in another application. This was the perfect opportunity to create a package. I followed the steps above and imported the package into the new application ten minutes later.
When I heard that Golang is great for developer productivity I dismissed it at first, because there are so many other tools and languages out there that claim this very thing. Going through this process is what convinced me. I have never created a fully-fledged package and imported it into a new project quite this quickly. I've worked with PHP for 5 years and Golang for 3 weeks, but creating a package in Golang was much faster and easier. Don't get me wrong, I'm not putting PHP down. PHP is and will be my main language. I've gone through the bad times (PHP 5.2 - PHP 5.4) and the wonderful times (PHP 5.6 - PHP 7.4) and I will most likely stick with it for many years to come.
I will maintain any package in any language I've published, so the Golang packages are no exception to this. I will also publish more Golang packages when this solves a pain I'm experiencing. This is no different than the PHP packages I've published so far. These were pieces of software that I was using in many places and I didn't want to maintain several instances of the same software. The packages that I've published and will publish in the future are all about scratching my own itch. If I don't experience a problem myself it's very difficult to write a solution for it and then distribute it. The next time I experience a problem in one application that I've already solved in another, I will create a package for it. This way I benefit from my earlier efforts and I'm able to give back to the amazing communities that have helped me for the past five years.
I wrote my first Golang package because I had already solved the problem I was facing in another application. I didn't want to maintain two instances of the same software and decided to publish this package. I don't know if it's any good, but it has already helped me solve many problems. When I went through the process of publishing this package I found out how simple it really was to create and distribute this piece of software. If even someone with as little experience as me can publish a package, then the creators of the language did something right. This process taught me a lot about the workflow for this new language and has convinced me to continue building and distributing packages.
If you're interested in having a look at the package I wrote you can find it on GitHub. While you're there and happen to know Golang yourself, I'd love any contributions and feedback. I'm very new to this language and want to learn the best practices. I'm also very interested in performance improvements, so if you have any suggestions for me, let me know! If you want to discuss this post, you can reach out to me on Twitter. I'd love to hear from you!
]]>
Recently I migrated a large business process from PHP to Golang for several reasons. You might not expect this, but I didn't make this choice without good reason. I didn't do the switch to learn the language. Before we get into the reasons for migrating the process from PHP to Golang, let's get into what the problem was that I needed to solve.
In an earlier blog post, "Struggling with micro-optimizations on large scale data processing", I described that I was struggling to make a process perform better. This process was the indexing of data documents into Apache Solr. The Solr server is amazing and performs well, but generating the data documents was the bottleneck. This process requires me to make anywhere from 1000 to 10000 calculations per entity. There are 12000 entities that need to go through this process, and 1 calculation took about 20ms. This isn't terrible by itself, but having to do this 1-by-1, it adds up. Generating records for all entities would take on average 5000 calculations × 12000 entities × 20ms = 333 hours and 20 minutes. This is unacceptable, as this needs to happen at least once every 24 hours. I was at a loss as to how to solve this until I encountered Golang.
In short, these were the problems I ran into and needed to find a solution for:
When I encountered Golang, I was very overwhelmed. The enormous number of data types, compared to the handful found in PHP, was hard to understand. It wasn't until I watched a few presentations that I knew what Golang was capable of. The main feature I was looking at to help solve the problem was concurrency. Doing 1 calculation at a time was too slow. Being able to do 8 calculations at a time (8 threads on the CPU) would, at least theoretically, improve the performance of this process by about 8 times. This would get the runtime down from 333 hours to "only" 42 hours. You still can't run 42 hours worth of calculations in 24 hours on the same hardware, but there were more potential improvements.
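Goroutines and channels keep this pattern short. Below is a rough sketch of the worker-pool idea described above: a pool of goroutines pulls calculation jobs from a channel, so several calculations run at once instead of one at a time. The `calculate` function is a stand-in, not the actual pricing logic from the post.

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

// calculate stands in for one price calculation.
func calculate(n int) int { return n * 2 }

// runConcurrently feeds 1..jobsTotal through a worker pool and sums the results.
func runConcurrently(jobsTotal int) int {
	jobs := make(chan int)
	results := make(chan int)
	workers := runtime.NumCPU() // e.g. 8 threads, as on the server in this post

	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := range jobs {
				results <- calculate(j)
			}
		}()
	}

	// Feed the jobs, then close the results channel once all workers are done.
	go func() {
		for i := 1; i <= jobsTotal; i++ {
			jobs <- i
		}
		close(jobs)
		wg.Wait()
		close(results)
	}()

	sum := 0
	for r := range results {
		sum += r
	}
	return sum
}

func main() {
	fmt.Println(runConcurrently(100)) // 2 * (1 + 2 + ... + 100) = 10100
}
```

With 8 workers, 8 calculations are in flight at any moment, which is exactly the theoretical 8x speed-up mentioned above.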
Another advantage that I was looking for right away was the fact that Golang is a compiled language, which means it's compiled from human-readable code into binary code. This is able to run "on the metal" in both my development environment and on the server. I reasoned that being able to do calculations at native speed would improve my code a lot. But, I had no benchmark to know how much it would improve the speed of the calculation. To make this simple for me, I would be happy with a 3x execution time improvement. This would take the total runtime down from 42 hours to 14 hours. This would mean that the entire process finally fits within the required 24-hour execution window.
So why Golang and not something like Java or C? Because I had more experience with Golang, and I've heard many great stories of fellow PHP developers who've managed to learn Golang with relative ease. This was enough motivation for me to take a deep dive into this language, which was completely new to me.
When migrating this large process to Golang, the first goal was to find a starting point. My team and I developed this process, refactored it, and added to it for 4 years, so I'm sure you can imagine how big this task was. I created many different ways of interacting with this process over the years. This helped to reduce code duplication, but it made it very difficult to change any code. Picking a starting point was difficult, but once I found one I could get to work. The starting point was calculating a price for a start date and an end date, the easiest scenario I could come up with. It took me 4 days to migrate this process, including tests for everything. After those 4 days, the process worked well and was fast, but there was a major bottleneck.
The first version of the migrated script was a Goroutine that would execute when the webserver received a request. This was a problem because, according to my calculations, I would need 60 million calculations, which means 60 million API calls. Everything was running on a single machine, so at least the internet wasn't the bottleneck here, but the local network was. It took 15ms for PHP to create a request and send it to Golang, which then took 0.2ms to 8ms to do the calculation. This meant I had about the same execution duration (an average of 5ms + 15ms = 20ms) as only using PHP. At this point, I wondered if I had wasted 4 days building something that didn't benefit me at all.
The solution came with the realization that I was still doing 1 calculation at a time: 1 API request at a time. I was using Goroutines and channels to calculate prices much quicker, but I was still processing the separate entities synchronously. I decided to move an earlier step in the process, where I generated a list of dates to calculate prices for, from PHP to Golang. This way I could calculate prices for all required dates concurrently. This increased the execution time in Golang to about 1 second, but it meant I only needed one request per entity. I could now calculate all prices for a single entity in 1 second, when this was 5000 × 20ms = 100 seconds before the migration. With that change I cut the total process execution time down to 12000 entities × 1 second = 3 hours and 20 minutes. Keep in mind that all these numbers are very rough averages.
Now there is one obstacle left: I still need to make 12000 API requests. Ideally, I would only make one request, but I realize this is overkill. If that were the case, I should move the entire process to Golang, never needing PHP. This is an option I'm looking at, but I won't do it for now. I was able to cut the process down from 333 hours to about 3.5 hours at the lowest and that will do for now.
As I showed in the previous section, the changes reduced the execution time by 99%. I would like to clarify that I can't attribute all of these performance gains to switching to Golang alone. Throughout the process of rewriting from PHP to Golang, I improved the architecture and the code itself a lot. The PHP code was so difficult to change in some places, due to a large number of dependencies, that it was doing unnecessary calculations. It was, for example, translating things that were not relevant to price calculations. I needed these translations in different parts of the application, so the entire process was doing too much work. When rewriting this in Golang, I removed all these things and only kept the part of the code that was responsible for calculating prices. So keep in mind that streamlined processes have something to do with the performance gains as well.
The impact of Golang was incredible nonetheless. The original process used anywhere from 60-70% of the CPU resources on the server. The Golang threads only took up 0.2-2% per thread (1.6-16% in total for 8 threads). So the resource usage and the execution time were much lower. The low resource usage also meant that I could increase the times I run the process per day. This was about once per week in the prior situation and is now several times per day. The servers used to run out of memory every single day and required manual restarts: this is now a thing of the past. The server doesn't run out of memory anymore and is now doing more than it was before.
In short, these are the solutions Golang brought to this process:
One of the main business processes, calculating prices and indexing them into Apache Solr, used to be a major headache. The process was slow and used a lot of server resources. Rewriting it in Golang and streamlining the internal processes improved the execution time by 99%: from 333 hours to 3.5 hours. By leveraging the built-in features of the programming language, I was able to rejuvenate this process in less than 2 weeks.
The lower resource usage of Golang meant that the process went from using 60-70% to 1.6-16% of the CPU. This, in turn, helped to stabilize the processing server and it hasn't run out of memory since I published the new process. This used to be a daily issue and now hasn't happened a single time in 2 weeks.
Leveraging the fact that Golang has built-in testing tools, the process is also completely covered by tests. I used these testing tools to build the Golang program through TDD. These tests help me sleep at night, knowing that the script does exactly what I intended it to do.
So would I recommend this? Well as always: it depends. If you're running into issues that you can solve by cleaning up your code, then it's not worth it. But if you need the concurrency and the native performance and want a simple language to help you achieve this, then Golang is a great choice.
You've reached the end! Thank you for reading this far, I appreciate your time and patience. If you have any questions or feedback for me, you can reach me on Twitter. I'm happy to talk to you about this more!
There are many, many Linux distros out there, and it's both amazing and confusing! It goes hand in hand with the open-source way of thinking: Don't like it? Fork it and improve it! That's exactly what the enormous number of Linux distros stands for. People found a distro they liked, but something wasn't right, so they changed it and made it into a new one. This is both the greatest and most confusing aspect of the Linux desktop. It's confusing for newcomers because they have no clue where to start on their Linux journey. I'm writing this post to explain one aspect of the Linux ecosystem: rolling releases and snapshot releases. What are the differences and which one should you use?
A rolling release distro is a distribution that continuously updates individual software packages and makes them available to its users as soon as they're published. This means that you as the user of the distro always have the newest version of the software installed. It means you get to enjoy new features as soon as they're released. It also means that many things could break if you haven't updated your system in a while or if incompatibilities are introduced between software packages. The goal of rolling releases is to get updates to users as quickly as possible.
A great example of a rolling release distro is Arch Linux and all distros that are based on top of it, like Manjaro or EndeavourOS.
A snapshot release distro is a distro that's released as a fixed version every so often and contains heavily tested and verified software packages, making sure everything is stable and "everything just works". This also means that the available software is usually a few versions older than the newest release. These older versions are stable and are guaranteed to work. That's the main goal of snapshot releases: stability. Packages are usually upgraded with every new version of the distro, which could come out anywhere from every few months to every few years. When you're on a specific version, you can expect everything to work. The downside is that you won't have the newest version of the software you're using.
A great example of a snapshot release is Debian and all distros that are based on top of it, like Ubuntu.
So now you can ask the question: which one should I use? That's a great question and it gets a boring answer: it depends. It all depends on your needs. If you're always working with the newest software and don't mind dealing with the occasional bug if it means you can use the bleeding edge, go for a rolling release distro. If you want your computer "to just work" and don't mind being a few versions behind the latest release, a snapshot release is perfect for your needs. Both types of distros have their advantages and disadvantages. It's up to you to decide what you prefer and would like to work with on a daily basis.
There is a small side note when you want to go for a rolling release distro and that is that you should probably not go for one as your first Linux experience. You're likely to encounter bugs when using a rolling release distro and unless you already know how to work around or fix some of these bugs, you might have a hard time using a distro like this for a longer period of time. A snapshot release is a better choice when you're new to Linux. You can get your feet wet in the world of Linux without too much risk of ending up with a broken system. Once you've encountered and solved some bugs in your snapshot releases, you're ready to work with a rolling release distro if that's what you want to do.
Here are some great examples of distros you can use for both types:
Rolling release distros:
Snapshot releases:
Now that I've explained the different types of distros, I'll get into a real-world scenario and tell you what I'm using on a daily basis and why. For my home systems, I use both a snapshot release (Ubuntu 18.04 LTS) and a rolling release distro (EndeavourOS). The reason for this mix is that the system with the snapshot release used to be my work system, which I'll get into in the next paragraph. I've worked with Linux for 3 years at this point, so I'm quite comfortable with the terminal and fixing any bugs in my system. I'm also a person who likes to have the latest version of the software, to take full advantage of new features. So the only logical choice for me was to try a rolling release distro. The reason I specifically went with EndeavourOS and not with Manjaro is resource usage. EndeavourOS is a very lightweight distro and I'm running it on an old laptop, which now works perfectly again.
For work, I use a snapshot release distro: Ubuntu 18.04 LTS. I've chosen this because I need everything "to just work". At work, I don't want to spend time fixing my machine when I should be writing and running code. Some software might be out of date, but by adding packages through "Snap" I'm still able to install the latest software, which automatically updates itself. By using snaps, if a software package breaks, only that package breaks, nothing else. I can downgrade it to a lower version until it works again. I don't think I will ever use a rolling release distro for work unless I become a rolling release distro ninja.
I've explained the difference between a rolling release and a snapshot distro and I've given you my real-world implementation of these types of distros. Now you get to decide which one you want to use for your Linux journey. Do you want the latest and greatest and don't mind getting your hands dirty? Go for a rolling release distro! Do you want everything "to just work" and want the distro to be stable at all times? Use a snapshot release distro. That's the beauty of the Linux desktop: You get to pick exactly what you want and what you need. The freedom the Linux ecosystem gives you is truly remarkable and it's one of the reasons I've stuck with it for a few years now.
If you have any questions about this topic or if you're just looking for a distro that would fit your wants and needs, reach out to me on Twitter. I'm more than happy to talk to you about Linux!
When building a product, it's often the top priority to get a first version out the door. When testing an idea, getting a prototype in front of people is the most important part of the process. Without a product to gather feedback on, you can't improve it any further in a meaningful way. So when is a feature or prototype done? That's what I'm trying to define in this post. Keep in mind that this is a new topic for me and I'm not a product manager in any way, so take this with a grain of salt.
Before starting work on a product, you should first define what "done" means. This might seem like a strange first step; after all, you're not sure what the customer is looking for yet. But it's an important part of the process. You define a version 1.0 to avoid scope creep: adding more features than you originally intended. Scope creep can delay a "final version" and you should try to avoid it at all costs. You need to outline the minimum set of features the product needs in order to be useful to the customer. These features will most likely change over time, but you need a base from which you can gather feedback. So come up with a few basic but essential features and build your product.
When these features are functional, you're ready to put the prototype in front of users.
After you've defined the basic features, you can start to build them. Most software developers, including me, love to over-engineer these features. I urge you to try your best to keep them as basic as possible. These features are there to prove a concept and will very likely change a lot after user feedback. When you've built a fully-featured product, changes become more difficult to apply. When you've kept the product nice and simple, you can easily change, add, and remove features.
So keep the features simple, both in design and function. The best way to test features is to leave out 90% of the details and make it very obvious to the user what something does. You can always extend functionality and customization later. The goal of the first version is to get users familiar with the potential capabilities of the product. They shouldn't have to deal with configuration settings in a menu yet. It's best not to allow any customization until there is a real need for it. Keep it simple for yourself and the users, until they need more.
Optimizing features is one of my favorite things to do, but you can't do this if you have no idea what the user needs. Only optimize when you have enough information to make decisions. This way you can actually measure your optimizations. Without measuring your changes, you're making changes without knowing the impact. This could actually make the product worse, not better. But, if you're not measuring anything, you won't know how the changes impact the users. The only way to find out in that situation is to ask them for feedback.
It's very tempting to use Google Analytics and start measuring anything and everything. Don't do this. You need to define very specific things to measure, otherwise you end up with data that could be completely irrelevant. This data could make it very easy to make decisions based on information that doesn't actually represent real world behavior. In fact, I wouldn't even recommend using Google Analytics for very specific products and processes if you're not sure what you'll use the data for. Google Analytics is amazing for static websites to figure out what users do, but using it for more complex processes might be too much work. Let me explain my reasoning on this.
I would try to build something that's built into your product to measure very specific things. This way you know exactly what you're measuring and why. This is where the real benefit of measuring and gathering data comes from. You can do these things in Google Analytics through the "dataLayer" object as well, but again, this might be more work than it helps you. This step depends on what you know how to do and want to do. In most cases, I like to build measuring into the product. That way, I can reuse the measurements and observed behavior for more things than tracking users. Building this into the product serves more than one purpose, which makes it worth it for me in the end.
You can use an event like "user viewed product X 3 times" to track behavior and then send an email to that user with some extra information. Afterwards, you could add the product as a favorite within their account. Building this into the product itself, makes extending any future steps very easy. If you were to use Google Analytics here, you can mark events, but you don't get any extra use out of it.
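A sketch of what such an in-product event counter might look like. Everything here is invented for illustration (the type names, the threshold of 3, the follow-up action); the real hooks would depend on your stack:

```go
package main

import "fmt"

// EventTracker counts named events; a stand-in for whatever storage your
// product already has (a database table, a key-value store, ...).
type EventTracker struct {
	counts      map[string]int
	threshold   int
	// onThreshold fires once, when an event's count reaches the threshold,
	// e.g. to send an email or add the product to the user's favorites.
	onThreshold func(event string)
}

func NewEventTracker(threshold int, onThreshold func(string)) *EventTracker {
	return &EventTracker{
		counts:      make(map[string]int),
		threshold:   threshold,
		onThreshold: onThreshold,
	}
}

// Record logs one occurrence of an event and triggers the follow-up
// action exactly once, when the threshold is reached.
func (t *EventTracker) Record(event string) {
	t.counts[event]++
	if t.counts[event] == t.threshold {
		t.onThreshold(event)
	}
}

func main() {
	tracker := NewEventTracker(3, func(event string) {
		fmt.Println("follow up on:", event)
	})
	tracker.Record("user:42 viewed product X")
	tracker.Record("user:42 viewed product X")
	tracker.Record("user:42 viewed product X") // third view triggers the follow-up
}
```

Because the counter lives inside the product, the same `Record` call can later drive emails, favorites, or anything else, which is the "serves more than one purpose" point above.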
When you've gathered your metrics, you need to figure out how you want to optimize your product. It's important to find optimizations that are simple to implement and benefit you the most. You want to make a lot of impact with the least amount of effort. Those are the optimizations worth your time. You can use this technique to prioritize specific improvements and the changes you need to make. The more difficult a change is to make and the less impact it has on your conversion rate, the lower its priority.
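As a toy sketch of that effort-versus-impact triage (all names and scores below are invented; in practice the estimates come from your team):

```go
package main

import (
	"fmt"
	"sort"
)

// Improvement pairs a candidate change with the team's rough estimates.
type Improvement struct {
	Name   string
	Impact int // expected benefit, 1-10
	Effort int // cost to build, 1-10
}

// prioritize sorts the backlog so that cheap, high-impact work comes
// first: the highest impact-per-effort ratio wins.
func prioritize(backlog []Improvement) []Improvement {
	sort.SliceStable(backlog, func(i, j int) bool {
		ri := float64(backlog[i].Impact) / float64(backlog[i].Effort)
		rj := float64(backlog[j].Impact) / float64(backlog[j].Effort)
		return ri > rj
	})
	return backlog
}

func main() {
	backlog := prioritize([]Improvement{
		{"Rewrite checkout page", 9, 8},
		{"Fix broken coupon field", 7, 2},
		{"Add one-click reorder", 6, 5},
	})
	for i, imp := range backlog {
		fmt.Printf("%d. %s (impact %d, effort %d)\n", i+1, imp.Name, imp.Impact, imp.Effort)
	}
}
```

The exact scoring doesn't matter much; what matters is that the whole team agrees on the numbers, so the ordering that falls out of them is shared too.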
This concept is the main point of this post. A lot of teams skip this part of the process and focus on what's the most fun to build. The effort you put into implementing optimizations might not be worth it, but you will never know if you don't look at the effort and benefit levels with your team. The team needs to agree on the priorities, because if it doesn't, you won't be able to focus on getting the best conversions.
On the technical side, when the whole team agrees on the priorities, it's very easy to come up with a list of improvements. You can motivate the team with some exciting features to build in the middle of a "boring" conversion funnel. I call it boring, because most developers don't care about marketing and conversions. Even if the primary goal of the new features is to raise the conversion rates. They may not care, but they're necessary members of your team to realize your conversion goals. So if you give them exciting technical features to build, they'll be excited about something they don't care for. This is all speaking from experience, because I used to be one of those developers. Luckily I like conversion optimizations now and this is because of this approach. The technical challenges have contributed to my excitement about making conversion funnels better.
After you've implemented your improvements, it's important to keep measuring your metrics. You want to be able to see if the changes you made actually improved your conversion rate. If they hurt your conversions, you might consider reversing the changes and trying another approach. Again, if you're not measuring, you're making changes based on thin air. These changes could be beneficial or make the conversions worse, but you won't know which if you don't measure anything.
Shipping a new product or feature is a lot more work than just the building itself. Before you start, you need to determine what to make and what it should be capable of doing. You should record these features somewhere, because they will help you avoid scope creep in the future. Scope creep could delay any new product or feature, so it's something you need to avoid at all costs. When you've got a basic version of the final product, you need to come up with some specific metrics and ways to measure them.
The metrics are the leading objectives when improving a product. Measuring the impact of changes throughout the development process will allow you to adapt and improve. When you've got the data, you can use this to focus on specific improvements and features. Choose those tasks that are easiest to build and make the biggest impact to your product. These are the most important tasks and help you and your team to focus on the task at hand. When the whole team is on the same page, improving the project becomes a breeze. If you ever forget what to do next, let the data lead the way.
This post is about something I'm still learning a lot about and I'm very new to it. This is an attempt to understand certain workflows better and improve on my development process. So if none of this makes any sense, please reach out, because I'm here to learn as well. You can contact me about any of this on Twitter.
You get tracked on most websites; your information and behavior aren't yours to keep anymore. This is the sad truth of the modern internet. I admit I'm guilty of this too; in fact, you're probably being tracked right now. If you're not taking precautions, your data is saved somewhere and some people will try to use it to sell products and services to you. If you're upset about this, you understand the magnitude of the problem. For those who don't see the big deal, the problem can be visualized quite easily. That's what this post is about. I'm going to tell you about the steps I've taken to take back my privacy, but also the steps I will take to keep your information anonymous when you visit any of my websites.
Back in the day, when you watched TV, you saw advertisements. These advertisements were broadcast to the masses: they catered to anyone and everyone, but to no one in particular. They were annoying at times, but didn't feel personal. They were anonymous, plain, and most likely never really stuck in your head. The fact that advertisers didn't know who you were as an individual didn't matter, because they were aiming the advertisements at a large enough audience to reach a small percentage of it.
Fast forward to the present, where anyone can track anything you do on the internet. Advertisers can ask any of the big search engines or social media platforms to send their advertisement out to anyone that has looked at a particular product in the past week. They know exactly what you've been doing on the internet and will pick you out specifically to show a particular advertisement. The ads are no longer aimed at a large audience but very specific people. Where 1 in a million people might have bought the product from the television advertisements, maybe this is now 1 in a hundred. This enormous increase in conversion is incredibly attractive to advertisers and will make them want to spend more money to reach even more people. They will quite literally pay to buy your attention because they know your likes and dislikes. They will keep showing you what they think you're interested in until you buy that product.
I said I would visualize this process, so I will. Imagine you're watching TV, where you think you're completely anonymous. Now an advertiser shows up at your house and follows you around for 20 hours per day for years. Now every time you turn on the television, you will see exactly what you want to see and every single advertisement is interesting and looks like something you want to buy. Wouldn't you feel a bit scared? You will feel like the anonymous people on the other side of the TV know everything about you and often even more than you know about yourself. They've built data models of you and can predict what you will want to buy next month or next year. Sure, this might seem convenient, because it takes a lot less effort from your side to access the information that you want to see. But if this happens enough times, you might start to wonder: "Do I trust them with all of my personal information?" I don't, so I'm here to help you to hide in the shadows and send the advertisers on a wild goose chase.
There are several ways to get more privacy on the internet. Some are easier than others and some go very far to achieve a high level of anonymity. How far you go with this is up to you, but any steps to take back your privacy are good steps. I've put together a shortlist of some simple and some more difficult things you can do to stay anonymous:
Switching from Google Chrome to Mozilla Firefox, Brave Browser, or Opera is a great first step in taking back your privacy on the internet. Google Chrome is a great browser and I've used it since it came out. It's been my primary web browser for years and it has served me well. But it has gotten a bad reputation over the past year or two: it allows many plugins to track all of your internet behavior, and it's owned by Google, one of the biggest marketing platforms in the world. This fact alone made me install alternative browsers. One of the first was Mozilla Firefox, which I now use as my primary web browser at home. Mozilla is a company known for doing everything it can to protect your privacy, and it has produced a lot of great material over the years about all kinds of things related to the internet. I used Firefox 2 and 3 back in the day, before switching to Google Chrome, and remember it being a great browser. The only reason I switched was the speed boost that Google Chrome provided. Now, years later, that speed boost is gone and Firefox doesn't feel slow to me at all. It might even be faster now.
Brave Browser is my primary web browser at work. The reason is that it's very similar to Google Chrome for web development, but it blocks all trackers automatically. For example, when visiting the CNN website, it blocks 45 trackers on a single page load. Yes... 45. Let that sink in... 45 different services are collecting your data. With Brave, all of these are blocked. The second benefit, besides not being tracked, is speed: websites load noticeably faster when trackers are blocked. On most websites, I've seen pages load at least 3 to 4 times faster in Brave than in Google Chrome. Since Brave also has an app for mobile devices, I use it there as well. It's so similar to Google Chrome that the switch was barely noticeable to me.
The last one on my list is Opera. I used Opera a long time ago on my mobile phone, but never realized they made a desktop browser as well until they announced that the new version has an ad blocker and a VPN built into the browser. This made me instantly download it, and now I use it every once in a while. It's a fast browser with some nice plugins to help you get more out of the standard web browsing experience. If you're not following any of the next steps, this would be a great option for you, as the VPN is included.
You may have seen people talk about DuckDuckGo in the past year because it's gaining popularity quite quickly. DuckDuckGo, unlike Google, doesn't use your online search behavior to sell your data to advertisers. DuckDuckGo (DDG) does sell advertising spots though, but these advertisements are only based on what you're currently searching for. This means that the ads may be less relevant, but at least your data isn't collected to sell as advertising data. Additionally, DDG doesn't track your behavior at all. This way you can be sure you're anonymous and still find great information on the internet.
In most browsers, you can choose which search engine you want to use. So if you change this from Google to DDG, you will instantly benefit from not being tracked anymore. I've done this on my laptop and phone and didn't notice a big change. They're both search engines and look very similar, but one of them doesn't track you and the other one does.
There are a few concerns some people have raised about the way Google prioritizes their search results. These have to do with the fact that Google doesn't only filter the results on being relevant to your search terms, but also relevant to your interests. This means that the search results you see in Google will differ from person to person, even if two people have used the same search terms. This helps to get the most relevant search results for you. A lot of people like this feature, because it means that you never really have to go to page 2. This is not the case for DDG, because no matter who searches, you always get the same results for the same search terms. This could mean that you sometimes have to go on the second page, but not often. It's just a thing to keep in mind when making the switch.
A lot of browsers can block trackers on websites. There are several guides on how to do this for different browsers. By choosing one of my suggested browsers from the first part of this post (Mozilla Firefox, Brave Browser, and Opera) you won't have to do anything at all. These browsers automatically protect you against most trackers. There are ways to do this in Google Chrome as well, but these require third-party plugins and those are not always trustworthy. The best solution to block these trackers would be to use a trustworthy browser, like the ones mentioned before.
A way to block advertisements in Google Chrome is to use an ad blocker. Ad blockers prevent advertisements from being shown, but won't protect you against being tracked. So in reality, you're still being tracked; it's just hidden. You won't see the results of being tracked, because the advertisements are simply removed from the page you're currently viewing.
If, after all of these precautions, you're still not satisfied with the results and want to be completely anonymous, there is another step you can take. To be completely anonymous on the internet, you can use a trusted VPN. A VPN is a Virtual Private Network, this means that any data traffic is being routed through the service to the destination server. The website you're visiting will think that the traffic it's receiving will come from the service you're using. If you want to read more about why I'm personally using a VPN, you can read more at "My thoughts about using a VPN during everyday life".
A good VPN will hide who you are and where you come from, so you're anonymously browsing the internet. There are a lot of good examples of VPNs that are reliable and fast. Here's a list of my recommendations:
There are more, but I've either used these or know people that use these daily. So I can at least say with certainty that these VPNs work well and are reliable.
There is a little caveat when it comes to VPNs and that's the following: There are VPNs that are installed on your machines and there are those that only work in the browser, like the one in Opera. This is a crucial difference. The VPNs that are installed in your browser will only route the traffic from that browser through a VPN, nothing else. A VPN that's installed onto your system will route all traffic through a VPN, including the traffic through your browser. This is something you should consider when looking at a service to use.
It's a scary thought, having someone look over your shoulder at everything you do on the internet, all the time. If you're fed up with this and want your privacy back, there are ways to do this. In this post, I've given you four steps you can take to gain more privacy on the internet and you can follow all or just some of them. Any steps you take improve your privacy on the internet and help you to stay ahead of trackers and people who are buying your data to figure out who you are and what to sell to you. You should be able to use the internet without having to give up your data in exchange.
If you support the work of Mozilla Firefox, Brave Browser, and Opera in trying to keep you protected on the internet, consider a donation to help further their cause. They're doing everything they can to educate the public about what it means to have privacy on the internet, and their products (the browsers) show this.
If you have any additions to this post, I'd love to hear from you on Twitter.
Blogging is an amazing thing to do for software engineers. I like to write blog posts for a lot of reasons, but a few of those reasons are more important than others. That's why I've created my top 10 reasons why software engineers should start a blog themselves.
Before I get started, I'd like to point out that I've included a few disclaimers at the bottom. They're about mental health, so make sure to read them. They're at the bottom of this blog post, above the conclusion.
Now follow along as I write about the reasons why I love blogging and why any software engineer should start a blog. The top 10 goes as follows:
I'll go into detail with each of these reasons, so you can read what I mean with them.
Every once in a while I have a day where I feel like I don't know anything about programming. This is a difficult situation to be in, because how do you tell yourself that you do know what you're talking about? Well, here comes "the blog". I started blogging in 2016 and I still remember writing that first blog post. On the difficult days, I can go back in time and see what I was up to. I usually go back to a post published six months ago and see what I was struggling with then. When I read those posts I often feel silly that I ever got stuck in that situation, because "now I know so much more". The imposter syndrome disappears because I've proven to myself that I made a lot of progress in a short amount of time.
As software engineers, we're reliant on the community of Stack Overflow or any other platform to help us with problems. We use our favorite search engine a few times per day to figure out how something works. We use the community, so I'm trying to give back to this community with what I learned as well. I write a blog post and publish it if I run into a problem, solve the problem, and believe others could use the solution. The most recent example of this is How to fix CORS headers in a single page application.
Juggling ideas and large concepts in our heads is a very common occurrence in our daily lives. Instead of keeping everything in your head and risking distraction and lost ideas, you can write it down. You can write down your ideas, apply some minimal formatting and grammar rules, and call it a blog post. This way you also have a place to point people to when you discuss your ideas with colleagues. They can read about your ideas when they get a minute, and they'll appreciate not being interrupted mid-work just because you'd lose the thoughts if you had to keep them in your head any longer.
Often, when you're working through a tough problem and you start to offload it into words on a screen, you end up solving the problem yourself. When you're writing and formatting your text, you need to rationalize your wording, and this often helps a lot when solving problems. This has happened to me on many occasions and it's quite satisfying.
Writing blog posts is fun! Fun should be your top priority when writing blog posts. It's not at the number one spot here, because it's not the most important reason why a software engineer would benefit from blogging. When you're having fun writing blog posts, you'll start to use it more often to help you out. Be it sharing knowledge, recording your progress, or offloading your thoughts, when you enjoy the process, you will choose it. When you stop enjoying writing blog posts, you'll be very likely to choose something else to help you. So when you want to start blogging, make sure it's something you enjoy doing or at least learn to enjoy it over time. At the time of writing (December 2019) I haven't missed a single week of writing a blog post since August. The primary reason for this is I enjoy the process.
When you're sharing your knowledge with the community, you'll start to notice some people engage with your posts. These are some great people you could talk to about some topics. They're enjoying your blog posts, so you can exchange ideas with them and learn from each other. They might have other problems that you know the solution to, in which case you can write blog posts about it. Meeting other software professionals is always a good idea anyway because you can help each other in the future. So being able to connect with them now gives you a benefit, since they'll hear from you through your blog posts. They'll know your struggles and might have solutions.
When other engineers understand what you're struggling with from your blog posts, they might be able to help you with some feedback. If the feedback is not about the problem itself but about grammar and/or spelling mistakes, you've still improved. By writing blog posts, you have the opportunity to learn from others and become a better developer. You will also become a better writer. Being a software developer and a good communicator is a very useful combination. People with great communication skills are very useful team members and managers. If you're writing and improving your skills, you'll help yourself by becoming an even better colleague to work with.
Most of us had to go through an interview process of some sort and show off our programming skills and conduct a technical interview. You might be able to escape this part of the process if you've shown off what you know and what you struggle with over a longer period. Displaying your skills can only help you, as it shows others how you can help them. But it also shows yourself what you can improve on. I like to display my skills to show others how I can help them, but also because I can record my progress. I like to be able to see what I was working on a month ago or a year ago. It helps me to focus on the future and see what I could improve on.
Software engineers are often seen as these closed-off people who hate speaking with others. Being able to communicate well will break this stereotype and make you stand out from the crowd. The other benefit is that you're practicing selling ideas and concepts to non-technical people. Imagine trying to sell Docker to your management without being able to put it into simple words. Management will never allow you to spend time on it, because they don't see the benefit; all they see is a complicated mess that takes a lot of time to set up. Your co-workers are more likely to listen to you if you're able to break large concepts down into normal words. Bonus points if you can explain the advantages and challenges. These are all skills you're developing with blogging and talking to your co-workers.
I like to be someone who knows what he's talking about, and I know you do as well. So why not show others? If you write 10 blog posts about JavaScript, readers will think that you know JavaScript. Instead of insisting that you know JavaScript, they can now see that you do. This is what I mean when I say that blogging helps you establish yourself as an expert. The more you talk about a topic, the more people will see you as an expert on that topic. Of course, the content of your posts needs to prove that you know what you're talking about. Once you can do this consistently, you're setting yourself up for success.
A blog is a great format for keeping others up-to-date. This is especially true if you work with a lot of colleagues. An e-mail is another great format for this purpose. But, if you're like me and don't have your work account on your phone, it's a bit difficult to keep track of these updates. A blog post is much easier, because you only have to share a link and you can let others know what you've been up to. You can post this link in a lot of places, so sharing this is much easier than sending an e-mail.
I'd like to clarify a few things about this blog post. There are a lot of blog posts out there about why you should start a blog in 2019. I appreciate those blog posts because they show that blogs are thriving. But most blog posts I've found list things like: "You can make money with it". While this is true, I left this out of my top 10. When you start this journey with the expectation to make money in a few months, you might lose motivation. When you're "still not making money", you might start to experience burnout. Mental health is very important, so please take care of yourself. Start this journey because you'd like to try it and think it might be fun, not because you want to make money from it. Money is a nice side effect, not the main goal. The main goal is blogging and keeping it up for some time.
Another thing I don't see enough in similar blog posts is that not every engineer has the opportunity to write blog posts. Whatever the reason is, it's a valid reason and you shouldn't feel bad about it. Blog posts like this are often a source of imposter syndrome. Sometimes they're ways to guilt-trip engineers into spending time outside of work to write a blog post. I know this goes on, because I've been through it. If you're one of them, remember this: if you're not writing blog posts, you're not any less of a software developer.
Thank you for reading this far! In this post, I went over the top 10 reasons why software engineers should write blog posts. Blogging has a lot of benefits, but you need to start this journey with the right intentions. Only start a blog if you enjoy writing or are willing to learn. Don't go into this expecting to make money from it in a few months. If you don't have the opportunity to write blog posts, then don't feel bad about not writing. If you're not writing blog posts, you're not any less of a software developer.
If you enjoy writing, these are some of the benefits you get from it:
What is your favorite reason to start blogging?
You can always contact me on Twitter and ask me questions. If you'd like to know how and why I started, I'm happy to explain this as well.
Windows comes pre-installed on almost all consumer non-Mac laptops on the market at the moment. This is fine for most people because it serves a very wide target audience. However, I'm not one of those people. I like to control everything on my computer and be able to delete anything I want, and I hate pre-installed advertisements and bloatware. This is one of the reasons I've gravitated towards Linux-based systems. But before we get to that step, I'd like to tell you my history with Windows and why I moved away from it.
Back when I used Windows on my computers, I had to reinstall it every once in a while to get rid of viruses, to replace any corrupt files, and to get a speed boost when launching my operating system. I thought this was very normal and most of my friends also went through this process every few months or so. This was fine for Windows XP and Windows 7, the installation was quite quick and after the installation had completed everything would work with some minimal installations. Everything was good in Windows land.
But then, Windows 8 appeared from the shadows. It was the long-awaited successor of the amazing Windows 7… what would Microsoft do to top this? Well, they dropped the ball, unfortunately. Windows 8, I think we can all agree, was ahead of its time and therefore not well received. It was also much slower compared to Windows 7, so it felt like a downgrade instead of an upgrade. Not long after, Microsoft came out with Windows 10 and gave everyone who was currently using Windows 7 and Windows 8 a free upgrade. Windows 10 was quicker and much smoother, and it didn't get in your way as much as Windows 8. Peace was once again returned to Windows land, well, sort of.
Then my trusted laptop, running Windows 10, died. It had the perfect web development setup running XAMPP, and everything was carefully crafted to work the way I wanted it. But it was all lost. When I got my new laptop, I had to go through the Windows 10 setup, eager to get into the desktop environment and set up my work environment the way it was on my fallen laptop. The setup began and didn't seem to end; it took a good 20 minutes between turning on my laptop and seeing the desktop for the first time. I hated every minute of it because I wanted to get back into my workflow, but the operating system got in my way. This is when I decided I'd had enough of Windows. It was no longer a convenience to me, but a burden.
For work, I had worked on a virtual machine for a while. A virtual machine running Ubuntu. I had gotten very familiar with it and decided I wanted to work with it all the time, not just at work for some specific tasks. I created a partition on my new laptop and created a Windows/Ubuntu dual boot, just in case I ever wanted to go back to Windows. Spoiler alert: I never touched Windows again and ended up removing the installation within 6 months. Ubuntu was my new operating system.
I installed Ubuntu on my new laptop for the first time. It was an exciting moment because I knew I probably wouldn't go back to using Windows. After the installation, which only took about 20 minutes, I could instantly get back into my work. I knew I had made the right choice and quickly installed everything I needed through the command line and got back to work as if it had always been this way.
Fast forward a year, and I never felt the need to reinstall Ubuntu because it had become too slow or because I had caught a nasty virus. The system still felt as quick and stable as the day I had installed it. At this point, Windows wasn't even on my radar anymore. The Windows partition had been wiped to serve as extra storage for some of my web projects. This is also the time when I started to come up with an idea. I was convinced my old laptop still worked; it just needed some serious help.
Reviving my old laptop was no simple task, but I was determined to see it through. After creating a bootable USB and charging the laptop for a few hours, I pressed the power button. And as expected, the laptop turned on. But that was all it did because Windows couldn't find a boot drive, even though the drive in the laptop worked. After this happened, I plugged in the USB drive, restarted the laptop and booted from the USB drive.
The installation took about 20 minutes and afterward I had a fully working system. The installation fixed the connection to the boot drive, and I'm still not sure why this ever broke on the Windows installation. I had a working laptop again, after it had been on my shelf for about two years. Ubuntu made the laptop usable again; I could actually boot up into a desktop environment.
Even though Ubuntu was installed and the laptop was able to boot again, it wasn't the best user experience. The laptop was 4 years old at this point and it had gotten slower. It's by no means a low-end system, but it's also not very fast. This meant that running Ubuntu was still too slow for my liking; I needed something that took up fewer resources. This is how I landed upon Fedora. Fedora seemed to perform a little bit better, but it was still too slow. I thought to myself: "There must be a distro light enough on resources to run smoothly on this laptop". It took a while, but I found one: EndeavourOS.
EndeavourOS is essentially an Arch Linux installation that installs the bare essentials for you. This means it also has a graphical user interface for the installation process, instead of the command line installation Arch Linux has. It installs a modified XFCE desktop environment, a file manager, a browser, and a few other essentials. It's a distro built on top of Arch Linux that tries to stay as close to Arch Linux as possible. The lack of bloatware means it's fast, really fast. It's the smoothest user experience of all the distros I've installed on this laptop, and while using it I often don't even realize I'm working on my old laptop.
The fact that I don't even notice I'm working on a different laptop is exactly what I'm looking for in a distro. The whole installation took 10–15 minutes; it was surprisingly quick. Afterward, I could instantly get to what I wanted to do and didn't have to wait on anything. It really has become a system as I like to see it: simple, fast, and it doesn't get in your way at all. The EndeavourOS experiment is not over, because I will continue to use it until it no longer fits my needs. This whole blog post was written and edited on that old laptop. The laptop that was forgotten about and discarded as being broken. That laptop has a new life now with a very bright future.
Thank you for making it this far, I hope you enjoyed reading this post. It was great writing about the past and bringing back all the good and bad memories, and I'm very hopeful about the future. If you have any questions about EndeavourOS, you can send me a tweet or DM me on Twitter and I will do my best to answer your question.
When you're developing an Angular application, you'll most likely use "ng serve" to display your application. But when you try to request data through API calls to "/api/some/resource", you get a 404 response. Why? Well, Angular sends the API request to http://localhost:4200/api/some/resource. Because you're not specifying a domain in your services, just a path, Angular will send the request to the current domain. That's exactly what you want in production, but it breaks in development.
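To make the 404 concrete: a relative path has no domain of its own, so the browser resolves it against whatever origin served the page. You can verify this with the standard URL API (the origins below are just the example ports used in this post):

```typescript
// A relative API path is resolved against the page's current origin.
const relativePath = "/api/some/resource";

// During development, the page is served by "ng serve" on port 4200,
// so the request ends up at the dev server instead of the API server:
const devUrl = new URL(relativePath, "http://localhost:4200").href;
console.log(devUrl); // http://localhost:4200/api/some/resource

// In production, the app and the API live on the same domain,
// so the same relative path just works:
const prodUrl = new URL(relativePath, "https://example.com").href;
console.log(prodUrl); // https://example.com/api/some/resource
```

This is also why you should keep the relative paths in your services instead of hard-coding a localhost URL.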
This is where the built-in proxy comes into play. When you're using "ng serve", you're serving the application at http://localhost:4200. This means the services will call the API at http://localhost:4200/api/some/resource; however, your API server isn't at that URL, so every request returns a 404. Your API server is served at something like http://localhost:8000/api/some/resource. By creating this proxy, the development server accepts the requests at port 4200 and sends them to port 8000 behind the scenes. So now you get your data instead of a 404.
This is the config you would be using for the situation I painted here:
{
  "/api": {
    "target": "http://localhost:8000",
    "secure": false
  }
}
This config should be placed in a new file called "proxy.conf.json", which you should place in the src folder of your Angular project. Next, you need to point to this file in "angular.json". Open the file and search for the "serve" section. Here you can add a "proxyConfig" key to the options. You should end up with something similar to this:
"serve": {
  "builder": "...",
  "options": {
    "proxyConfig": "src/proxy.conf.json"
  }
}
RSS (Really Simple Syndication) and Atom (Atom Syndication Format) are two ways to syndicate your content across platforms. A lot of people have heard about RSS in some way or another, but fewer people know Atom. Atom is just a more modern version of RSS, and they serve the same purpose: sharing updates from a source to different destinations. In this post, I'll explain why you should be using a feed, be it Atom or RSS, for your blog. If you're wondering what the differences are between RSS and Atom, you can read about them on Wikipedia.
Before I continue, it might be a good idea to explain what syndication actually means. In the journalism world, according to Dictionary.com, a syndicate is an agency that acquires content from different sources and distributes that content for simultaneous publication in many different channels (newspapers, websites, etc.). This means a central location can acquire content from all kinds of different sources and then publish that content from a single source to a lot of different places. To put this in perspective for this blog: this blog contains content from different sources, in this case me, the writer, and publishes it to a lot of different places at the same time. So syndication is the process of acquiring and then publishing content. If you need some more information, the link to Dictionary.com mentioned earlier shows more meanings. Now, let's get into why you should have a syndication feed for your blog.
When you're hosting your blog by yourself, or even when you host it on a website like wordpress.com, you will hopefully publish posts regularly. To reach the maximum number of people, it's best to post your blog posts in as many places as possible. For example, when this blog post is published, it's automatically posted to dev.to, MailChimp, Pinterest, and several RSS readers. So when I publish a post, not only does it appear on my blog, it also becomes visible in many more places to reach a much larger audience. This is partly because a large part of my audience most likely has no clue this blog even exists, but it's also a convenience for them because they get to read my posts in the places they already visit. So it's a win-win: my content gets read, and people don't have to go out of their way to consume it.
An easy way for a content creator to reach the target audience is to go to the places where the target audience hangs out. Manually posting your content there could take a lot of time depending on the number of channels you're going through. So being able to do this automatically relieves a lot of pain and saves you a lot of time.
A lot of services can consume and produce an RSS or Atom feed. This makes automation incredibly simple because you only have to update the feed on your blog and all these other services will pick it up. They will then perform some task for you. You don't have to manually tell those services that you published a new post, they will retrieve it from your website. This means that you don't have to do anything yourself when you want to share your blog posts. This is in contrast with sending API requests to other platforms telling them an update is available. You don't have to write any implementation details for sharing your content, but rather, you can use this standardized system to create the feed in one place and then hang back and relax until the other services request the feed and pick up your content.
A lot of people think syndication feeds are this outdated technology people used a decade ago. I used to be one of those people until I discovered their true potential. Syndication feeds allow you to tailor your newsfeed exactly the way you want it. Instead of going through a newsfeed that's been created by something like a newspaper, where you see every single news article, you can pick and choose which channels you would like to see. This sounds oddly familiar, doesn't it? It sounds like a social media platform, where you decide who you want to follow and hear more from.
So really, syndication feeds are very modern but get a bad reputation "because it's so old and rusty". Do you know how Spotify, Apple Podcasts, Stitcher, and all the other podcast players know which episodes are within a podcast? That's right, RSS. If you've been on Facebook and Twitter in the last year or two, you'll have noticed that you keep missing posts of your friends and people you follow. "I posted this yesterday, didn't you see it?" has probably been asked a lot in the past two years. The fact is that no, that person probably hasn't seen your post. This is because news feeds are no longer sorted chronologically; instead, they go through an algorithm and are sorted for maximum engagement.
The platforms are trying to keep you on the platform, so they push content you'll most likely interact with. If you're tired of this, you can sometimes change the settings to show the news feed in chronological order. If not, you always have the option to subscribe to a syndication feed (most platforms have them, just use your favorite search engine). This way, the content is always chronological. You have more things to do in a day than to be on Twitter and Facebook, unless that's your job of course. So in a way, syndication feeds are a great way to break out of the engagement trap: take control of your news feed, consume the newest pieces of content, and get on with your day.
After reading this far, you might be convinced that a syndication feed is a great thing to have, but now you wonder if it's difficult to implement in your blog. If you're hosting your blog on a blogging service, you most likely already have an RSS or Atom feed for your content. Just look through your settings or use a search engine to find out where to get the link for it. If you're hosting your own custom website, there are open source solutions for this. If you're using PHP and/or my open source CMS, then you can use one of the following packages to help you create your syndication feeds:
In the first two links, you'll also find some examples of what an Atom and RSS feed looks like, so you can always create a feed by hand if you don't want to use any generator or server scripts.
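If you'd rather see the shape of such a feed in code: below is a rough sketch of generating a minimal RSS 2.0 document with plain string templating. The post data and URLs are made up, and a real implementation should also escape XML entities and include fields like description and guid:

```typescript
interface Post {
  title: string;
  url: string;
  publishedAt: Date;
}

// Build a minimal RSS 2.0 feed for a list of blog posts.
function buildRssFeed(siteTitle: string, siteUrl: string, posts: Post[]): string {
  const items = posts
    .map(
      (post) =>
        `<item>` +
        `<title>${post.title}</title>` +
        `<link>${post.url}</link>` +
        `<pubDate>${post.publishedAt.toUTCString()}</pubDate>` +
        `</item>`
    )
    .join("");

  return (
    `<?xml version="1.0" encoding="UTF-8"?>` +
    `<rss version="2.0"><channel>` +
    `<title>${siteTitle}</title>` +
    `<link>${siteUrl}</link>` +
    items +
    `</channel></rss>`
  );
}

const feed = buildRssFeed("My Blog", "https://example.com", [
  {
    title: "Why you should use a syndication feed",
    url: "https://example.com/posts/1",
    publishedAt: new Date(Date.UTC(2020, 0, 1)),
  },
]);
console.log(feed);
```

Once this string is served at a fixed URL on your site, every service that understands RSS can pick up your new posts from it.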
We're here, at the end. I hope I've convinced or at least informed you about using a syndication feed for your blog. It has helped me out a lot already, and adding new content to it is an automatic process for my CMS, so I don't even have to worry about it anymore. When my posts get published, they automatically get shared with 4 different platforms, and those platforms perform tasks automatically, so I can forget about them. By having these syndication feeds, I can focus on writing blog posts and leave the sharing of content to my CMS. If this is something you're looking for as well, I'd highly recommend embracing this "old" technology and using it for what it does best: sharing your content.
If you have any questions you can contact me on Twitter and I'll do my best to answer them for you. If you are looking for an open-source PHP content management system, I'd like to direct you to the website: AloiaCMS. You can install it yourself for free.
Event sourcing is a very fascinating concept in programming. I think it could be used as a single source of truth for a wide range of decentralized applications. Event sourcing is a concept that took me quite a while to get my head around because it's very different from the normal way of dealing with data in some kind of database. In this post, I will quickly go over the concept of event sourcing and how it differs from something like a CRUD application. Then I will go over some aspects of event sourcing that could help make it very easy to create decentralized applications, all using a single source of truth to perform tasks.
CRUD applications are standard practice in a lot of places when it comes to developing applications. CRUD simply means Create, Read, Update, and Delete. In practice, this means that you have 4 different ways of interacting with a data object. This makes dealing with data very easy and intuitive. You can create data when you need it, read it when you want to display it, update it when it changes, and throw it away when you no longer need it. It's a very natural way of thinking about something.
Event sourcing only has 2 different ways of interacting with the data if you're thinking in terms of database interactions: creating and reading. In essence, event sourcing is nothing more than appending to the existing state of a data object. Let's go over an example to make clear what I mean. Imagine you have a blog post and you want to publish it. In a CRUD application, you can just modify the post record to set published to true and add a timestamp for the publish date. In an event-sourced application, this is a little different, but not more difficult. When you have the existing state of an unpublished blog post, you can simply record an event: "Published blog post". Your database now contains an event that tells the current state of the blog post that it has been published. You won't need to add a publishing date, because the event already contains information about when it was recorded. This recording date equals the publishing date.
When it comes to event sourcing, all you need to remember is this: you can only append to the current state of the piece of data. You might now be wondering: but how do you delete or update the blog post? That's simple as well: you record two new events, "Updated blog post" and "Deleted blog post". When you record the "update" event, you can register what the new contents of the blog post should be, all while keeping the old version of the blog post in your database. This is where the single source of truth aspect of event sourcing begins.
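To make the append-only idea concrete, here's a small TypeScript sketch. No real event-sourcing library is involved; the event names and fields are just illustrative. The current state of the blog post is rebuilt by replaying the recorded events in order:

```typescript
type BlogPostEvent =
  | { type: "BlogPostCreated"; title: string; recordedAt: Date }
  | { type: "BlogPostUpdated"; title: string; recordedAt: Date }
  | { type: "BlogPostPublished"; recordedAt: Date }
  | { type: "BlogPostDeleted"; recordedAt: Date };

interface BlogPostState {
  title: string | null;
  published: boolean;
  publishedAt: Date | null;
  deleted: boolean;
}

// Rebuild the current state by replaying every recorded event in order.
// "Update" and "delete" are new appended events, not mutations, so the
// full history stays in the event store.
function replay(events: BlogPostEvent[]): BlogPostState {
  let state: BlogPostState = { title: null, published: false, publishedAt: null, deleted: false };
  for (const event of events) {
    switch (event.type) {
      case "BlogPostCreated":
      case "BlogPostUpdated":
        state = { ...state, title: event.title };
        break;
      case "BlogPostPublished":
        // The event's own timestamp doubles as the publishing date.
        state = { ...state, published: true, publishedAt: event.recordedAt };
        break;
      case "BlogPostDeleted":
        state = { ...state, deleted: true };
        break;
    }
  }
  return state;
}

const state = replay([
  { type: "BlogPostCreated", title: "Draft", recordedAt: new Date("2020-01-01") },
  { type: "BlogPostUpdated", title: "Event sourcing", recordedAt: new Date("2020-01-02") },
  { type: "BlogPostPublished", recordedAt: new Date("2020-01-03") },
]);
console.log(state.title, state.published); // Event sourcing true
```

Note that the old title "Draft" is never lost: it still sits in the event list, even though the replayed state shows the newest title.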
In a CRUD application, you only know the current state of the application, but you have no clue what it looked like yesterday or a year ago. This is because you're updating the current state to reflect a new state, thus getting rid of the old state. In event sourcing, you're constantly appending new information. This means you can look back in time and see what the data looked like a day or a year ago. This is all great, but how does it make event sourcing the single source of truth? Great question, let's get into that.
The way event sourcing works is that it records an event any time anything happens. This means all events related to a single resource are always recorded chronologically. Since you're only appending to the existing state, it's very easy to share these changes to any other application that wants to hear it.
Let's say you have an existing event-sourced application with a database full of events, and you want to create a new application that generates reports based on what happens in the main application. With a CRUD application, you will need to fire events every time something changes. This is fine, but what if you want to know anything about prior changes? Well, you're out of luck, that data simply doesn't exist. With an event-sourced system, the new application can ask the main application for all events related to a single resource. This way, the new application knows exactly what has happened to that resource, and the state will always be the same in both applications. When new events are recorded in the main application, all the new application needs to do is ask for any events that happened after the last event it retrieved. It won't have to check its own data; all it needs to do is append to its own state and the data will be synchronized.
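The "ask for everything after the last event you saw" part can be sketched like this. This is a toy in-memory event store; a real system would expose this over an API and use sequence numbers from the database:

```typescript
interface StoredEvent {
  sequence: number;
  type: string;
}

// The main application's event store: an append-only log where every
// event gets an ever-increasing sequence number.
const eventStore: StoredEvent[] = [
  { sequence: 1, type: "BlogPostCreated" },
  { sequence: 2, type: "BlogPostPublished" },
  { sequence: 3, type: "PageViewRecorded" },
];

// What the reporting application asks the main application:
// "give me everything after the last sequence number I processed".
function eventsAfter(lastSeen: number): StoredEvent[] {
  return eventStore.filter((event) => event.sequence > lastSeen);
}

// The reporting app already processed event 1, so a sync only has to
// transfer the two newer events; no state comparison is needed.
const delta = eventsAfter(1);
console.log(delta.map((e) => e.type)); // [ 'BlogPostPublished', 'PageViewRecorded' ]
```

Because the log is append-only and ordered, the receiving application never has to diff its data against the source; it only appends the delta.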
This approach of data sharing not only makes the server load lower for both applications, but it also makes the data reliable across all applications.
When you have two applications connected to a single event store (a database containing the recorded events), there is no longer a problem when it comes to data synchronization. To explain this concept, I first need to explain how a resource interacts with recorded events. A resource is called an aggregate root in the world of event sourcing. This sounds intimidating, but it's not as bad as it seems. An aggregate root is just an object that is able to record events and use past events to make decisions about incoming events. Example time!
When an aggregate root receives a command telling it to record a pageview for a blog post, it has the ability to look at all other attributes of that blog post and make a decision. For example: After a single person viewed the blog post 3 times, email that person about blog posts just like it. The aggregate root knows, based on past events, how often someone has viewed the blog post. So when that third view comes in it will record the pageview event, but also "Emailed visitor some related blog posts". Another part of the application, or even a whole different application, can now respond to the new event and email that visitor some interesting blog posts.
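Sketched in code, that decision could look like this. The three-views-then-email rule comes from the example above; the class and event names are otherwise made up, and a production aggregate root would persist its events instead of keeping them in memory:

```typescript
type DomainEvent =
  | { type: "PageViewed"; visitor: string }
  | { type: "EmailedRelatedPosts"; visitor: string };

// A minimal aggregate root: it holds its own event history and uses
// past events to decide which new events to record.
class BlogPostAggregate {
  private events: DomainEvent[] = [];

  recordPageView(visitor: string): void {
    this.events.push({ type: "PageViewed", visitor });

    // Count this visitor's pageviews from the recorded history.
    const views = this.events.filter(
      (e) => e.type === "PageViewed" && e.visitor === visitor
    ).length;

    // On the third view, also record a follow-up event that another
    // part of the system (or another application) can react to.
    if (views === 3) {
      this.events.push({ type: "EmailedRelatedPosts", visitor });
    }
  }

  history(): DomainEvent[] {
    return [...this.events];
  }
}

const post = new BlogPostAggregate();
post.recordPageView("visitor-1");
post.recordPageView("visitor-1");
post.recordPageView("visitor-1");
console.log(post.history().map((e) => e.type));
// [ 'PageViewed', 'PageViewed', 'PageViewed', 'EmailedRelatedPosts' ]
```

The aggregate never looks at anything except its own recorded events, which is exactly what makes the decision reproducible from the event store alone.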
Back to data synchronization. An aggregate root will read all past events every single time it receives a new event. This means that when an event was recorded in a completely different application (but still connected to the event store), the aggregate root knows about it and can use it to make decisions about what to do next. Maybe it records a new event, maybe it records two, three, four, or five. It doesn't really matter, because the next time an aggregate root in a different application reads the current state, it will have all of the new events in memory.
This same process is very difficult in a CRUD application, because what happens if you accidentally miss a notification about a new update being made? The next time you're comparing the resource, it might look completely different and you might not be able to tell which one is the correct one. This is why I'm saying that event sourcing is the single source of truth. There is no uncertainty because you can recreate the current state from the list of appended events.
As you can tell, I'm very excited about event sourcing. It's a big paradigm shift, but once you get your head around the concept of event sourcing, you will understand how powerful it really is. If my blog post didn't explain event sourcing clearly enough, there are a lot of amazing resources out there that you can use. An example is this video where Greg Young explains in his words what event sourcing is and when you should use it. Any of his presentations on this topic are great to watch, so go find all of them. I'll list a few:
All I can say now is that you should have a look at this concept and try it out for yourself. I haven't really looked back after working with it for a few weeks now. It has been a really great resource for building reliable applications so far. If you'd like to talk to me about this topic, reach out to me on Twitter.
When you have a website that's either static or doesn't require a database, you have a lot of options when it comes to hosting your website. One of them is hosting your website right out of your GitHub repository through GitHub Pages. In this post, I'm going to explain what GitHub Pages is and how you can use it to host your website on the reliable GitHub servers, for free. Yes, that's right, hosting a website on GitHub Pages is free, but only when using a public repository.
The hosting on GitHub Pages is very simple: everything in the repo can be served to the client. This means if you have an index.html in the repository at the root level, it will be served at the root of the domain. There are exceptions to this rule when you're using some static site generators, but I'll get to those later on in the post.
You can host a website on GitHub Pages even if the repository is private. When your repository is private, there are a few limitations when it comes to hosting it through GitHub Pages. As GitHub describes in their documentation "About GitHub Pages":
GitHub Pages is available in public repositories with GitHub Free, and in public and private repositories with GitHub Pro, GitHub Team, GitHub Enterprise Cloud, and GitHub Enterprise Server.
So yes, you can host a website from a private repository, but you'll need to upgrade your plan to a paid plan. This is quite cheap though, so it might be worth it for you.
You can't host any traditional server-side dynamic websites on GitHub Pages, but there are some solutions. One solution is to run a client-side dynamic website through JavaScript. You can load any JavaScript files into the index.html file and make your website dynamic through client-side routing.
Another way to make a "dynamic" website is to use a static site generator. This sounds strange because you're generating a static website, so how can that be dynamic? Well, it's dynamic when developing the website. You can use variables and create templates. For me, this was always the biggest deterrent from building static websites with HTML and CSS. I don't like copy/pasting HTML code and having to edit pages in multiple locations. With a static site generator, you can create templates, so you only have to change things in one location. When you generate the static website, everything will be converted to static HTML and CSS, so you don't have to think about it anymore.
When you choose to use a client-side dynamic application, you can just upload all assets to the repository and you're done. You can skip the next part in this post and go straight to the part where I show you how to set up the repository to be used as a website. If you want to use the static generator approach, go to the next section and I'll show you what the workflow looks like.
Static site generators are great and there are a lot of them out there. You can use something that's based on JavaScript/React if you're already familiar with those techniques. A great example of that approach is GatsbyJS. It's based on ReactJS and builds a static site for you when you're done. So you can build your entire website in ReactJS as you normally would and then tell GatsbyJS to convert it to an HTML/CSS website. This way you can build a website without having to learn new technologies, and that's very convenient.
If you're familiar with the Twig templating engine for PHP or the Liquid templating engine, you can use Jekyll. I use Jekyll for my projects, but as you can see, it's just a matter of preference. One isn't better than the other, so go with what you like to use.
In this post, I will go over Jekyll, since I know it best and can paint a realistic picture for you. I won't go over installing Jekyll, because I think the team behind Jekyll has done a great job describing this process on their website. Essentially, Jekyll is built on the Ruby programming language, but don't let this scare you, because it's simpler than you'd expect. As a PHP developer, I was very hesitant to use Jekyll, because I don't know anything about Ruby and it seemed very intimidating. But if you follow the guides step by step, you will be fine. At a certain point, you won't have to deal with Ruby anymore and you get to build your website. So just take your time with it and don't be afraid to make mistakes.
After you've installed Jekyll and you're ready to get started, it's easiest to choose an existing Jekyll theme and customize it to fulfill your requirements. You can find out how to use Jekyll themes by reading the documentation. I tried to build from scratch at first, but that was very confusing as a first attempt and I almost gave up on using Jekyll. When I found a nice base theme and used that to start with, I had a great time because everything started to fall into place. When you have some reference code that you can learn from, it's a lot easier to build your website. If you want to use the base theme I used to start, you can find it on GitHub, it's called Pixyll.
Initially, I used the template as it came from GitHub, without any modifications. I wrote my content and pushed it to master. Now, there is one helpful thing about Jekyll that I'm not sure the other static site generators can offer: GitHub can automatically build and publish websites built with Jekyll. So all I had to do was build the dynamic aspect of the website and push it to master. GitHub takes care of building and publishing.
After having pushed several updates with new content to GitHub, I wanted to customize the styles and templates of my website. Because you're working with dynamic templates, you can make any change to the layout files and see it reflected on your pages instantly. You can add and remove any HTML you want. You can even define global variables and use them in your templates to make them dynamic. For example, you can add a title variable in your configuration file and then output it in your templates using the Liquid templating engine:
{{ site.title }}
You can use the tag above if you have something like this in your _config.yml file:
title: This is a title
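To sketch how this extends to a full layout, here's a minimal, hypothetical _layouts/default.html; the `default` filter fallback is my own illustration and not part of any particular theme:

```liquid
<!-- _layouts/default.html (hypothetical example layout) -->
<html>
  <head>
    <!-- Use the page's own title when it has one, otherwise fall back
         to the site-wide title from the configuration file -->
    <title>{{ page.title | default: site.title }}</title>
  </head>
  <body>
    <!-- Jekyll injects the rendered page content here -->
    {{ content }}
  </body>
</html>
```

Any variable you add to the configuration file becomes available under the site namespace, and per-page front matter shows up under page.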
If you ever get stuck or have a question, the Jekyll documentation has the answers you're looking for. It's not often that the documentation is so complete that it answers all the questions you might have about working with the software. It's some of the best documentation I have ever seen and I strive to write my documentation as well as the Jekyll team has.
Setting up a repository to serve a static website is very simple. It only takes a few steps, which you can follow in the official guide for setting up GitHub Pages.
You can now view your website at https://your_username.github.io/repository_name.
If you want to use a custom domain, like https://your_domain.com, then you should look at Configuring a custom domain on GitHub Pages. Those pages will tell you exactly what you need to do to serve the static website at any domain you own.
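One small detail from that guide worth highlighting: GitHub Pages reads the custom domain from a file named CNAME in the root of your repository, containing nothing but the bare domain (your_domain.com is a placeholder here):

```text
your_domain.com
```

You still need to point your domain's DNS records at GitHub Pages; the linked guide lists the current values to use.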
Do you want a more in-depth tutorial? Then you're looking for How to set up and automatically deploy your website to GitHub Pages. That tutorial will take you through all the steps you need to deploy your website on GitHub Pages without a hassle.
Now you have a static website running on GitHub Pages. This includes a free SSL certificate, and you don't have to worry about managing servers and hosting at all. So, in the end, you have a lightning-fast website running on GitHub's very reliable servers. When you want to update the content of the website, just make your changes locally and push to master. GitHub will automatically rebuild your website and you'll see your changes reflected within minutes. So if you have a content website and don't want to worry about hosting, security, and other settings, just use GitHub Pages.
As some of you might know, I've been working on a content management system. I've described Why I built my own CMS in an earlier post. Then, after I wrote How to write good documentation, I thought to myself: "I just wrote about this, but I'm talking the talk and not walking the walk." Let's change that and make it as easy as possible to read the documentation and make amendments to it.
After the initial realization that I had poor documentation for my project, I started looking into ways to make the documentation more accessible to people other than me. I quickly landed on GitHub Pages for hosting the website, as this requires no effort on my side to host it, take care of SSL certificates, and other basic things. I wanted to encourage myself to actually write good documentation; if I had to take care of all of those things first, it would just become a burden and I wouldn't want to write anything.
As you know, GitHub Pages only hosts static websites, but I wasn't ready to write plain HTML and CSS, because where's the fun in that? I remembered from back in the day that Jekyll is a static site generator and guess what? GitHub Pages supports Jekyll. This meant I found what I needed to get started.
I knew I wanted to use Jekyll, but I had no clue how to make a Jekyll project and what to do. After trying to create my own project from scratch I was ready to give up. I had no clue what was going on and this took too much effort to get something simple out of the door. However, after some browsing, I found there was such a thing as Jekyll templates. Great, a chance for me to take a shortcut. I created a very basic website and published this. Below you'll find a screenshot of what it looked like:
As you can see, it's very basic and makes heavy use of the existing template. This was a good start, but there isn't any documentation at all.
After publishing the first version of the documentation website, I kept working on a newer, less basic version. The current version of the documentation website is still quite basic, but has some branding and actual documentation. The biggest challenge is figuring out what to document and what to leave out. I decided to start with some very basic things, like what the project is and who it is for. The next logical things to document are the system requirements and how to install the content management system in Laravel applications.
I included a page that describes some plugins I've written for the content management system, because I use them for my own projects (this website is one of them) and they add a lot of value for me. This is not strictly documentation per se, but it does help people understand what the project is and what it's not: a drop-in CMS for established projects, not a standalone CMS.
For the past few years, I've maintained a mono repository for my day job. This repository included a complete Laravel application and a complete Angular application (first AngularJS, later Angular). This all worked well together, but became more difficult to maintain as the development team changed members and responsibilities over time. In this post, I'll walk you through the old situation, through an intermediate scenario, to the current architecture. As most architecture decisions are very much bound to a specific use case and definitely don't work for everyone, I will clearly explain what my choices were and why I made them.
In this post I will go through four stages.
Stage 1 is the stage in which I was learning to build production-ready applications. This meant I was making as few "external" connections as possible. In my mind, an external connection was having to deal with two separate physical locations in which I was running some code. During this stage, Laravel served a very basic Blade file which in turn booted the AngularJS application. This was quite easy, as AngularJS could be used as a drop-in front-end framework. So all I needed to do was make the server serve the correct HTML to the browser, and from there AngularJS took over and booted the application for the visitor.
During this stage, I was already working with API calls, which meant that Laravel was responsible for serving the barebones HTML to boot AngularJS, but also had to respond to API calls with the correct data. This worked very well for a very long time (2 years).
Stage 2 is where I learned a lot about JavaScript and optimizations. Since AngularJS was starting to show its age and the application was getting larger and more difficult to manage, I made the choice to upgrade to Angular, which was at version 6 at the time. I had built other applications with the new Angular framework combined with Laravel and was very impressed with how quickly the application booted. AngularJS took a good few seconds to fully load, sometimes up to 6 seconds. This was unacceptable, but I was running out of things to optimize...the application was ready for an upgrade.
During this stage, I migrated the entire AngularJS application to Angular. This took about 4 months, but the time was well worth it. The application booted very quickly and was much easier to manage. Since everything was TypeScript instead of JavaScript, we had fewer runtime bugs, and the application was built using modules and components. This meant we could very easily chunk and lazy load modules, which made the application much more lightweight.
Everything seemed great, but this is only stage 2 out of 4, so what happened? Well, the team changed members and responsibilities. Before, I was the one to manage Laravel and the Angular (and AngularJS) application. But I was trying to move more towards the Laravel side of things and away from Angular. So my colleague took over some of my tasks when it came to developing the front-end application. My mono repo, with its complicated and non-standard build tasks, was history.
Stage 3 is strange, but also a great step in the right direction. We made the move to completely cut out Angular from the Laravel repository. This brought many great advantages, but the most important one was that the usage of the Angular CLI became much easier. This meant we could start to use "ng serve" for the first time. This made developing the Angular application a breeze.
At this time, we also started to move into automated tests, which meant that both the Laravel application and the Angular application went through a CI pipeline. This on its own has brought many improvements to the quality of our work when it comes to writing reliable applications. Having two separate repositories for the two separate applications made it possible for us to use two different CI pipelines, as this wasn't possible before.
The second part of the title for this section mentions that the two applications were hosted to be backward compatible. This sounds strange, but let me explain why I did it this way: for the very simple reason that it didn't require any new or updated code in the Laravel application. So essentially we had two different applications that somehow needed to be merged to become one again for the production environment, because the Laravel application depended on the presence of the Angular application: Laravel still served both the API and the Angular application in one. This meant that I had to write a bash script to perform a semi-automatic deployment, where the Angular application was built in my local environment, packed into a tar archive, uploaded to the server through SSH, extracted, and finally cleaned up after. Yes...very overcomplicated, but it made it easy to work on the Angular application.
When I initially built this workflow, I knew it was only going to be temporary, because I don't want others to ask me to deploy changes, simply because they don't understand how. That's just not worth anyone's time. The next logical step was automatic deployment, and that's what the next stage is all about.
Stage 4 is blissful. I finally made it here, after almost 4 years of hacking away at "work in progress" workflows. So what is stage 4? Well, stage 4 is where everything is done with developer satisfaction in mind: automatic testing and automatic deployment. Stage 4 has a full-blown CI/CD pipeline in place, so anyone can deploy changes by themselves.
You might be wondering how I got to this stage. Well, let's start with Netlify. I discovered Netlify just recently, after having all of Twitter be enthusiastic about it for a very long time. I realize I'm very late to the hype train with this, but I never really had an opportunity to have a look, until recently. So just for our internal purposes I signed up for Netlify and put our Angular application on it. This was primarily used for testing and viewing deployment previews when pull requests came in on GitHub. After having done this for about 3 weeks, I thought: "Hey, why are we not using this in production?". So I got to work and a week later I was ready. The Angular application is now hosted on Netlify and any changes are automatically deployed. This means that I don't have to be bothered to deploy changes and my colleagues are empowered to deploy their changes, run A/B tests, and show their proposed changes to the rest of the team.
The Laravel application is now solely responsible for processing API calls (okay, and some administration pages built with the Laravel framework). Since I'm currently the only one making changes to the Laravel application, there is no automatic deployment strategy, but automatic testing is in place. Automatic deployment is the next logical step, but this will happen when it's really needed, like it was for Angular.
So was this enormous change worth everything? 1000% yes! I've been able to empower my colleagues to continuously push changes to production without any downtime or help from others. This change has made it amazing to work on the Angular application once again. That alone was completely worth all of the work I put into it, but there is more. The front-end website has also become faster. Netlify's post-processing has made the website perform much better compared to the old situation, and the Lighthouse scores prove it (it was between 7 and 45 and is now 67). It's still not the highest score, but at least now we can very easily push improvements to get it to 100.
I loved writing this post! I'm very excited that I've been able to build all of this, through years of trial and error, to make our platform better and contribute to developer satisfaction when fixing bugs and building new features. Thank you for reading this far! If you have any questions or would just like to say hi, you can contact me on Twitter.
The magical moment is finally there, you've written the tests and the screen says "100% coverage". You're happy, all tests pass and your code will never be bad again. But is that really what 100% test coverage means? Let's explore the topic together and I'll tell you my thoughts about "the magic 100% test coverage" milestone.
Great, you've got 100% test coverage, but what does it actually mean? 100% test coverage simply means you've written enough tests to execute every line of code in your application. That's it, nothing more, nothing less. If you've structured your tests correctly, this would theoretically mean you can predict what output a given input will produce. Theoretically... It doesn't mean you've actually written a test to verify that the expected output is actually returned. It could just mean you've written a test for a different part of the application and a line was executed in the process.
So now that I've outlined what I mean by 100% test coverage, let's look at some reasons why you would want to achieve the magical 100% test coverage and some reasons why it might be a huge waste of time.
There are several reasons why achieving 100% code coverage is a good idea, given that you write a test to verify specific use cases. This means that you purposefully write tests to verify certain scenarios are dealt with in the way you intend them to, not just tests that execute your code in the background. So for the next part of the blog post, you should keep in mind that the tests are written to thoroughly test your code.
While writing tests and making sure you get to that 100%, you will most likely find code that hasn't been executed by any of your previous tests. This could mean that you need to write another test to verify if a specific use case is dealt with properly by your code, or it could mean that the code is not used anymore. If the code is not used (anymore) you can remove it. If that's the case, writing tests has already had the added benefit of finding dead code and cleaning up your codebase in general.
Another scenario you might encounter is that you find code that has silently been failing up until now and you've just never noticed it before. If you write a test with a certain input and you're expecting a certain output, it can't result in any other value without a good reason. You might discover that you need to write an additional test to cover the new scenario or you've found broken code. When you find broken code, the test has already proven its value. The test is there to verify your code works and if it finds an error, it has served its purpose.
So imagine you've gotten to 100% coverage after fixing all errors and removing all unused code, making sure to cover most if not all scenarios your code might deal with. Pretty satisfying, right? Well, now comes one of my favorite benefits of having 100% test coverage: refactoring old code and writing new features. When you have the tests, you theoretically know what output you'll get for a certain input. This is a very powerful concept, because it means you can refactor your code in any way you can imagine and get instant feedback on the refactor. You will instantly find out if you broke the code or if everything is still working. The tests expect a certain output for a given input and will let you know as soon as this changes. Of course, some unit tests might break, because they rely on specific implementations of your code, but the integration tests won't break, as they don't really care how you solve the problem, as long as it produces the expected result.
And the last benefit I can think of is a sense of security and confidence about the reliability of the code. The confidence in the system is only as great as the level of trust you put into the tests. So if you write tests, but don't really trust the result, it's probably time to write more and/or better tests. You need to be able to rely on your tests, as they represent the proper functioning of your application to the outside world. If you trust the results of the tests, you'll be able to ship new features much faster. If you don't trust the results of your tests, you won't keep writing tests in the long run, as they become an obstacle to getting what you need: a "working" system. I deliberately put that in quotation marks, because without trusted tests there is no way to verify that the code you wrote actually works. Sure, the first time you can test it by hand, but 10 features later you won't do this anymore. Then, if any new feature breaks this code, you won't find out until someone brings it to your attention.
As I mentioned in the introduction about why it's a good idea to achieve 100% test coverage, you need to write tests with the explicit purpose of testing a specific piece of code. This is where a lot of people will, rightfully in some cases, tell you that test coverage doesn't really mean anything. In this section I break down what that means and why going for 100% coverage purely for the sake of 100% coverage is a bad idea.
As I mentioned at the beginning of this post, 100% code coverage means that 100% of your lines of code have been executed while running your tests. That's great, but it doesn't mean anything on its own. If some code gets executed, but you don't have tests in place to verify that what is being executed actually does what it's supposed to, you are effectively tricking yourself. You will believe that just because your tests execute all lines of code, the code actually does what you intend it to. That isn't necessarily the case. If you're only writing integration tests, you will cover a lot of code, but the individual methods won't be tested.
This means that you need to write unit tests to verify if a single method returns exactly what you intend it to. If you have this in place, only then you can trust that the code coverage actually means your code is working as it should.
As I mentioned in the last paragraph, just because you executed every line of code, doesn't mean you actually verified it's working as it should. This means that there could be any number of unexpected errors hiding in plain sight. For example, you've written tests for a controller and verified that all methods on the controller work as intended. That's a great start, but you're not there yet. What if the middleware blocks the user from ever reaching the controller? Well, you haven't tested for this aspect. So your tests might return green, but your application doesn't work as intended. This is a basic example, but you get my point: multiple ways lead to a certain result and you need to verify all of these ways function as intended.
Tests can be misleading. This is especially true if you write tests after you've already written the code. You might find a method that hasn't been tested properly, so you write a test for it. Great start! But what happens if the result you assert is itself a bug? An example: the specification says an input of 1 should produce 3, but the method computes 1 + 1. You write your test against the observed behavior, assert that an input of 1 returns 2, and the test passes. Green, so the code works? It seems that way, but your assertion has codified the bug. This is a very basic example, but it's good to pause and think about what it means: you've written a test that makes sure you never find the bug automatically, until a customer stumbles upon it. The process of writing tests is to make sure you understand your code and shape it to your will. The tests are your primary source of truth, so make sure you can rely on their results.
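To make this concrete, here's a minimal sketch in shell (the function name and numbers are hypothetical): the specification says the function should add 2, the implementation adds 1, and a test written from the observed output locks the bug in:

```shell
# Hypothetical buggy implementation: the spec says plus_two() should
# return n + 2, but it actually computes n + 1
plus_two() { echo $(( $1 + 1 )); }

# A test written against the *observed* output: asserts 1 -> 2, passes,
# and silently codifies the bug
[ "$(plus_two 1)" -eq 2 ] && echo "test passed"

# A test written against the *specification* catches the bug
[ "$(plus_two 1)" -ne 3 ] && echo "bug found: expected 3"
```

The first test passes and the line is fully covered either way; only the second assertion reflects what the code was supposed to do.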
Writing tests is a great way to write better software, but it's not a silver bullet. We're human (right?) and humans make mistakes. This is why writing tests for everything is not enough, but it's a great base to build from. You need insightful error reporting and you need to deal with errors properly. You need to let the right people know when something is breaking and you need to make it very simple to fix any mistakes.
I promised you I would give my opinion on 100% test coverage, so I will. 100% test coverage for the sake of getting 100% test coverage is a huge waste of time, because it has no added benefit for you, your stakeholders, or your customers. It gives you a false sense of security and will only cost you valuable time. 100% test coverage as a tool, however, is great. It forces you to think about how you structure your code and how you can write it as simply as possible, and it helps you eliminate unused code.
So is 100% test coverage worth it? Well, that depends on the situation. Often it has a lot of benefits, but again, it's not the ultimate solution to write great software.
What are your thoughts about 100% test coverage? Have you ever actively chosen to not go after the 100% coverage? Why? Thank you for reading this far. Let me know on Twitter what you think about this topic.
Writing documentation is often more important than writing code itself. Why? Well, if no one knows how to use your code, no one will use it. You need to be able to explain how your code works and why it works the way it does. This way, other developers will know how to write code in the same style you do. If you provide examples for the way you're implementing your code, others will understand the context in which to use your software.
So the question remains, how do you write good documentation? There are a few steps you need to take to get to a good collection of explainable concepts:
You can use these three points to determine WHAT to document. You might think you should document everything, but that's not necessarily true. You should document what you deem to be the most important part of your software and what others will be using most often. When those concepts are crystal clear, you can document the more hidden parts of your code, if it's needed.
You've gathered a small list of things to document, great! Now we can move onto the part where you start writing. However, there are a few things you need to keep in mind when writing documentation:
This list was compiled in a great article called The eight rules of good documentation by Adam Scott. For an in-depth explanation of each of these concepts, I'd like to point you to that article.
These rules might seem very obvious, but you'd be surprised how often they are not kept in mind when writing documentation. When explaining concepts, you should use a friendly tone. You want people to read about your software, and you shouldn't make them feel like less of a developer for not immediately understanding your code. You should also go into detail, giving wide-ranging examples of how to implement the software, without writing the same thing ten times.
When writing about your code and there are several ways to interact with, for example, a class, you don't have to document every single way to do this. You can provide the way you have implemented the software in your projects and let them explore the other ways to interact with the code. All you have to do is paint a picture and help the developers understand how and why you chose to write the software the way you did.
When writing documentation you should make sure that others can easily update it. This has the added benefit that new features, which others have built, can be documented for you and others. It also means that the documentation is much more likely to stay up to date with the usage of the code. There is nothing more frustrating about a piece of software than documentation that hasn't been updated in a while, where the code examples no longer resemble the actual implementation. Stay on top of this and take the time to update the documentation. Others, and your future self in a few months, will thank you for it.
Last but not least, make sure your documentation can be found in a very obvious place. If you want your documentation to have added value, people should be able to find it and navigate through it. There are plenty of examples of great documentation where the main priority was making it quick to find and read through; the Laravel documentation is one of those. There are also terrible pieces of documentation, I won't name them, but they're often automatically generated from the code. These automatically generated documentation websites cover too much ground, and do it in such a way that you might as well read through the source code, because at least you'll be able to click through that. Don't do this, because it will raise more questions than it answers.
Now you have some basic guidelines to keep in mind when writing documentation. So there is only one thing left to do: write great documentation! You'll do yourself and others a huge favor by providing documentation for your software. Any new code will adhere to this documentation, and it'll free you up to write code instead of just fixing code others wrote. If you have any additions or questions, you can contact me on Twitter at any time.
Efficient and fast CI pipelines are great, because by running automated tests you quickly know whether your application behaves the way it should. Pipelines that take a long time to complete have the disadvantage that people might start to ignore the status checks if something needs to be fixed quickly. This is something you want to avoid, so I've put together a way to run PHPUnit tests in a very simple environment without having to install Composer on the CI machine itself.
When your application depends on a lot of different services, you want to mock most of them or run them in RAM. For database tests, for example, you often want to use an in-memory SQLite database. However, in my case this was impossible, as the application depends on certain geolocation functions (ST_AsText, ST_MPolyFromText, ST_IsValid, etc.). These are not available in SQLite, so instead a MySQL server with a tmpfs mount for its data directory will have to do. This simply means that we're using a MySQL server that is tricked into using RAM as its storage device, which results in lightning-fast read and write operations. You get the functionality of a MySQL server with the performance of an in-memory SQLite database.
You can very easily do this in Docker. I'm using a docker-compose.yml file, but you can also do this in the terminal with plain docker commands. This is how I've done it in docker-compose:
version: "2.3"
services:
  mysql:
    image: mysql:5.7
    tmpfs: /var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: root_password
      MYSQL_DATABASE: testing_database
      MYSQL_USER: testing_user
      MYSQL_PASSWORD: testing_password
    networks:
      - front
networks:
  front:
The special things in this configuration are the tmpfs key and the networks. The tmpfs key tells the Docker container to place the /var/lib/mysql folder in RAM; this is the folder that contains all the data stored in the databases. The networks key is important, because we'll come back to it in a minute. For now, you just have to create a new network, named "front", and make sure the mysql container is part of it. This is important for later.
Since it's not really possible to cache installed programs and extensions in CircleCI (installed through apt-get install), the only other way is to create a Docker image that contains all required programs and extensions. When you build this Docker image, the layers will be cached and you'll be able to run the image in mere seconds, even though it includes all the software you need to build your application. If you install these programs on the virtual machine within CircleCI, this could take up to 5 minutes, and that's just preparing the testing environment. Using a provisioned Docker image, you can download it in 10 seconds and run commands 3 seconds later.
Since I'm testing a Laravel application, I can use the following Dockerfile to install any and all composer dependencies and run any and all artisan commands:
FROM debian:9.7-slim

ENV DEBIAN_FRONTEND=noninteractive

RUN apt-get update \
    && apt-get install -y --no-install-recommends apt-transport-https lsb-release \
        ca-certificates wget build-essential \
    && wget -O /etc/apt/trusted.gpg.d/php.gpg https://packages.sury.org/php/apt.gpg \
    && sh -c 'echo "deb https://packages.sury.org/php/ $(lsb_release -sc) main" > /etc/apt/sources.list.d/php.list' \
    && apt-get update \
    && apt-get install -y --no-install-recommends php7.3 php7.3-fpm php7.3-mysql \
        mcrypt php7.3-gd curl php7.3-curl php7.3-mbstring php7.3-xml php7.3-soap \
        php7.3-zip php-zmq php7.3-bcmath php-pcov unzip \
    && curl -s https://getcomposer.org/installer | php \
    && mv composer.phar /usr/bin/composer \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

VOLUME /var/app
WORKDIR /var/app

EXPOSE 9000
As you can see here, I'm using the Debian slim image instead of Ubuntu 18.04 (or any other version), for the sole reason that the Debian image is 50-60% smaller, which means it downloads much faster. Instead of downloading 900 MB, CircleCI will "only" have to download 300 MB. This Dockerfile installs all the PHP dependencies I need and the latest version of Composer.
The resulting image is also available for download; you can pull it by running:
docker pull roelofjanelsinga/test-suite
When setting up the application in CircleCI, we'll need to install all Composer dependencies and generate an APP_KEY before running any tests. We can very easily install the composer dependencies without installing composer on the virtual machine, because we have the docker image. Run the following command to install all composer dependencies:
docker run --rm \
    -u $(id -u):$(id -g) \
    -v `pwd`:`pwd` -w `pwd` \
    --network=$(docker network ls | grep front | awk '{print $2}') \
    roelofjanelsinga/test-suite \
    composer install
Let's go through this command line by line:
docker run --rm: This will run a command in a new container and remove the container after the command has finished.
-u $(id -u):$(id -g): This will run the container with the same user as your current user (ex: 1000:1000). This avoids any incorrect file permissions.
-v `pwd`:`pwd` -w `pwd`: These are backticks, not quotation marks! This will mount the current directory in the same location in the docker container and set the working directory to that folder. This means that all commands will be run in that folder.
--network=$(docker network ls | grep front | awk '{print $2}'): This is where the networks key from earlier comes in. The command $(docker network ls | grep front | awk '{print $2}') returns the name of the network you created in the docker-compose.yml file. If you haven't named your network "front" in the earlier steps, be sure to replace it in this command. Normally docker-compose will name your network something along the lines of prefix_front. However, this is not a guarantee, so by running docker network ls we get the actual name. This part of the command attaches the container to the network, which allows you to connect to the MySQL server through the Docker network.
roelofjanelsinga/test-suite: This is where you specify the image to run. I'm simply using the image we created earlier. CircleCI won't have this image available locally and will download it. This is why I'm using Debian slim instead of Ubuntu: just to make this process run more quickly.
composer install: This is the command we're running. It will install all Composer dependencies using the software we've installed inside the Docker container. Since we mounted the current directory into the Docker container and we're running the command in that directory, this will write all Composer files to the storage layer of the virtual machine.
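As a side note, the network-name lookup from the --network flag can be tried in isolation. Here it runs against a hard-coded sample of docker network ls output (the myproject_ prefix is made up for illustration):

```shell
# Simulated `docker network ls` output (NETWORK ID, NAME, DRIVER, SCOPE);
# the "myproject_" prefix is illustrative, docker-compose picks its own.
sample_output="NETWORK ID     NAME              DRIVER    SCOPE
1a2b3c4d5e6f   bridge            bridge    local
9f8e7d6c5b4a   myproject_front   bridge    local"

# The same filter as in the --network flag: match the line containing
# "front" and print the second column (the network name).
network_name=$(echo "$sample_output" | grep front | awk '{print $2}')
echo "$network_name"  # → myproject_front
```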
Installing Composer dependencies takes a while, depending on how many dependencies you have. To avoid doing this every time, we will cache them:
- save_cache:
key: my-project-composer-dep-{{ checksum "composer.lock" }}
paths:
- ~/my-project/vendor
This will save the installed dependencies under a key based on a checksum of composer.lock, so the cache is only invalidated when you change your list of Composer dependencies. When a cache is available, you can restore it as well; let's add this in a step before we install the Composer dependencies. Composer will then see that the dependencies are already installed and skip the installation.
- restore_cache:
keys:
- my-project-composer-dep-{{ checksum "composer.lock" }}
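The {{ checksum "composer.lock" }} part of the key is what ties the cache to your dependency list. The idea can be demonstrated in plain shell, with a hash of a temporary file standing in for composer.lock:

```shell
# A temporary file stands in for composer.lock in this demonstration.
lockfile=$(mktemp)

echo '{"packages": []}' > "$lockfile"
key_before=$(md5sum "$lockfile" | awk '{print $1}')

# Changing the lock file produces a different checksum, i.e. a new cache
# key, so the old cache is ignored and dependencies are reinstalled.
echo '{"packages": ["foo/bar"]}' > "$lockfile"
key_after=$(md5sum "$lockfile" | awk '{print $1}')

[ "$key_before" != "$key_after" ] && echo "cache key changed"
rm "$lockfile"
```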
Now that we've discussed everything that's related to using Docker to improve your CI pipeline performance, I'll show you the full configuration and how you could use it in your own projects:
version: 2
jobs:
Test-PHP:
machine:
image: ubuntu-1604:201903-01
working_directory: ~/my-project
steps:
- checkout
- restore_cache:
keys:
- my-project-composer-dep-{{ checksum "composer.lock" }}
- run:
name: Starting docker-compose services
command: |
echo "Starting docker-compose"
docker-compose -f docker-compose-ci.yml up -d
- run:
name: Install Composer dependencies
command: |
mv .env.testing.example .env.testing
docker run --rm \
-u $(id -u):$(id -g) \
-v `pwd`:`pwd` -w `pwd` \
--network=$(docker network ls | grep front | awk '{print $2}') \
roelofjanelsinga/test-suite \
composer install
docker run --rm \
-u $(id -u):$(id -g) \
-v `pwd`:`pwd` -w `pwd` \
--network=$(docker network ls | grep front | awk '{print $2}') \
roelofjanelsinga/test-suite \
php artisan key:generate
- save_cache:
key: my-project-composer-dep-{{ checksum "composer.lock" }}
paths:
- ~/my-project/vendor
- run:
name: Run PHPUnit tests
command: |
docker run --rm \
-u $(id -u):$(id -g) \
-v `pwd`:`pwd` -w `pwd` \
--network=$(docker network ls | grep front | awk '{print $2}') \
roelofjanelsinga/test-suite \
./vendor/bin/phpunit
As you can see, all commands that interact with the application are run through the Docker container. We're installing Composer dependencies, generating an application key, and running PHPUnit tests in the Docker image. This means the virtual machine (Ubuntu) doesn't need to install anything, because Docker is already preinstalled. The only thing the virtual machine does is pull the latest changes from the Git repository; everything else is managed through the Docker container.
I hope this helps you improve the efficiency of your CI pipelines. If you have any questions or suggestions to make this configuration better, please let me know on Twitter.
]]>If you're just starting out, you often want to come up with complex solutions to problems. Sometimes you do this to learn a new skill or to show off your problem-solving skills. Complex solutions are often perceived as a sign of knowing a lot and being good at something. That's sometimes a valid assumption, but in the large majority of cases a simple solution is exactly what you want to write, and that often requires a lot of skill. Let me explain why.
You can look at any problem and come up with some kind of solution; anyone can do this. What separates you from the rest is being able to do it in the simplest way possible. You might wonder: why is this important? Well, if everyone understands the code you've written, it's easy to maintain and won't cause a lot of confusion. Simple code is likely to survive multiple rounds of refactoring. Seeing the simplest solution is a skill you need to practice, because it's the result of filtering many solutions in your head and coming up with the best-fitting one.
If you make a mistake and pick a complicated solution, it might bite you later in the process. That definitely doesn't mean you should only make the right choices. In fact, it's the opposite: make mistakes, and a lot of them. You learn from mistakes, and you'll never make the same mistake again once you've figured out what went wrong and why. Oftentimes you might write an overcomplicated solution. The efficiency of your script then becomes a lower priority, and it loses you time every time it runs. Such a solution needs to be refactored a few times by the person who originally wrote it, and serves as a great learning opportunity. Every time the script is refactored, you'll learn something new, and gradually you'll figure out how to write complex scripts in a simple, maintainable way.
Senior engineers are often more concerned with the architecture of the application overall. This often means they're great at separating different concerns into different scripts. For example, a script might be used in slightly different ways in 3-4 locations. Instead of constantly adding to this number with yet another implementation of the same script, perhaps it's better to standardize how the script is used, extract it into a class, and use the class instead. If you really need a different use for it, create a class that extends the base class or write an adapter. The point is, senior engineers have done this countless times, so they're very likely to recognize a scenario where a script might be extracted into a class, and will do so from the start. Engineers who don't yet have the experience of extracting a script like that many times might just copy/paste the same script, adjust it in a few places, and be satisfied with the result, not knowing that same script might haunt them in the future. But when it starts to haunt you, you'll have a great learning experience and you'll make the same mistake less often.
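The "extract into a base class, then extend for the variant" pattern described above can be sketched generically. This example is in Python rather than PHP, and every class and method name in it is invented purely for illustration:

```python
class ReportExporter:
    """Base class: the logic that used to be copy/pasted in 3-4 places."""

    def export(self, rows):
        # The shared behavior lives in one spot now.
        return "\n".join(self.format_row(row) for row in rows)

    def format_row(self, row):
        return ",".join(str(value) for value in row)


class TabSeparatedExporter(ReportExporter):
    """A variant use case extends the base class instead of duplicating it."""

    def format_row(self, row):
        return "\t".join(str(value) for value in row)


print(ReportExporter().export([(1, 2)]))        # → 1,2
print(TabSeparatedExporter().export([(1, 2)]))  # → 1	2
```

A bug fixed in `export` is now fixed for every caller at once, which is exactly the payoff of extracting the copy/pasted script.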
In short, being able to go through all possible solutions in your head and picking the simplest, most maintainable one is a skill that you need to practice. It's difficult and you will make mistakes, but making mistakes is necessary. You need to make mistakes to figure out what works and what doesn't. You can always ask more experienced engineers, but until you really understand why something works the way it does, you won't really remember what you did the last time you encountered a certain scenario. So fail often and learn from your mistakes quickly.
If you have anything you'd like to add to this post, please contact me on Twitter. I'd love to learn from your experiences and would like to pass your knowledge on to others.
]]>In September I updated my portfolio in a few places. My main motivation was to display a more complete selection of what I have done and what I'm currently working on. As I've been quite active writing blog posts lately, I've made sure to include those on the homepage as well.
These are the things I've changed on my portfolio:
As you can see, the changes are all about giving an excerpt about what I've been working on and grouping these things. The dark blocks, for example, are there to split the content into smaller sections instead of making one long content block that sees no end. This was a result of me not being the best at designing and wanting to bring the attention of the visitor back to the content.
I've added my CV quite prominently on the website. This is intentional, because it hopefully cuts down on the number of times I have to put together a CV. Putting together a CV is not something I enjoy, but by turning it into a technical challenge I actually enjoyed the process. The benefit now is that I never have to make one again. The only thing left to do is to make it possible to download the CV as a PDF, and then that section is done.
My tech stack is a page that displays what I'm comfortable using in production projects and what I'd love to learn more about. In the future, I might turn this into a page that displays what projects I'm currently working on and which tech stacks I'm using for the project. This way I can track my progress with certain technologies and the ways I'm implementing these.
Before, all pages (except for the blog posts) were Blade templates that needed to be edited in a code editor. This is a great way to build websites, but not a great way to manage content. Since my Flat File CMS was already integrated into the website for the blog posts, I decided to also allow it to manage some of the pages. This means I can now manage the content of "My tech stack" and "The techniques I used to build this website" from my phone.
My blog posts are the biggest section of my portfolio website. Most of the software behind the scenes is set up to handle automatic publishing, editing blog posts, and automating image manipulation. So you could say that I'm running my portfolio website on my blog and not the other way around. Because of this, I found it only fitting to display blog posts on different parts of my website, in addition to the section on /articles. Adding more than 2 seemed a bit excessive though, so, for now, it displays the two most recent posts.
As you can see, there aren't a lot of changes, but there is potential for much more. The fact that I can now also manage more content from my phone, or from any computer on which I don't have access to a terminal and my web server, is amazing. I love making my life easier with little optimizations like this. I've written much more about this in my blog post "Why I built my own CMS", which is essentially all about making my life easier.
Do you have any tips on other things I should display on my portfolio website? Let me know on Twitter and I'll try to implement your suggestions.
]]>Recently I worked on a problem that didn't seem solvable. It's a bit difficult to explain, but let me try. A process exists which calculates a price, including discounts, seasonal pricing, blocked dates, etc. It's quite a long process, because there are hundreds of variables that could impact the final price. When you request a single price for a given date and stay duration (price per day, price per week, etc.), it'll give you an answer quite quickly, usually within 120ms. This is pretty good for a single calculation, but what if you have to do thousands? That's when it becomes problematic.
When you're calculating something thousands of times, every millisecond adds up to seconds. Let's take 120ms as the speed of a single calculation and see what happens when we run it 5000 times.
5000 * 120ms = 600,000 ms = 600 seconds = 10 minutes
As you can see, that's quite a long time to do 5000 calculations, so let's see what happens when we improve the calculation speed by just 1 millisecond:
5000 * 119ms = 595,000 ms = 595 seconds = 9 minutes and 55 seconds
As you can see, a small improvement of 1 ms already saves 5 seconds. Of course, in the big picture, what are 5 seconds out of 10 minutes? Not as much as you'd like.
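The arithmetic above is easy to double-check with a few lines of Python:

```python
calculations = 5000
ms_per_calculation = 120

# 5000 * 120 ms = 600,000 ms = 600 seconds = 10 minutes
total_ms = calculations * ms_per_calculation
print(total_ms, "ms =", total_ms / 1000 / 60, "minutes")  # 600000 ms = 10.0 minutes

# Shaving 1 ms off each calculation saves 5000 ms = 5 seconds per full run
saved_seconds = calculations * 1 / 1000
print(saved_seconds, "seconds saved per run")  # 5.0 seconds saved per run
```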
So these numbers are nice, but what do they mean? Well, I'm working on a system that continuously indexes large amounts of documents into a search engine running on Apache Solr. The indexing process goes well, the search engine works well, but the calculation stage, when creating these documents, is the real bottleneck. As the variables that determine the price change often, prices have to be calculated for each day, for all available stay durations. You might be wondering what this looks like, so let me try to visualize it with some data:
Imagine you have available dates on a random date like 2019-09-14 and the first blocked date is at 2019-09-21, you can still make a booking for 1 day, all the way up to 1 week (check-in and check-out can happen on the same day), but you can't make a booking for 2 weeks, as the second week is already blocked off. This means that we need to calculate prices for 2019-09-14 for the following stay durations: 1, 2, 3, 4, 5, 6, 7. This is 7 calculations for a single day. For the 2019-09-15 we need to calculate prices for the following stay durations: 1, 2, 3, 4, 5, 6. As you can see, we won't need to calculate the price for 7 days, because you won't be able to make a booking for 7 days, as the last day would be blocked by another booking.
We can't simply use the price for 1 day and multiply this by 7 to get the week price, because sometimes a discount only applies to a booking that's 1 week or more, which means that you'd display a price that's much too high for a week if you just multiplied the day price. Long story short, we need to calculate the price for each stay duration separately to make sure it's accurate.
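To make the enumeration above concrete, here's a small Python sketch of how the valid stay durations per start date could be derived. It's an illustration of the idea, not the actual production code:

```python
from datetime import date

def stay_durations(start, first_blocked, max_duration=7):
    """Stay durations (in days) that need a price for `start`, given that
    `first_blocked` is the first unavailable date. Check-in and check-out
    can happen on the same day, so a stay may end right on the eve of the
    blocked date."""
    days_free = (first_blocked - start).days
    return list(range(1, min(days_free, max_duration) + 1))

blocked = date(2019, 9, 21)
print(stay_durations(date(2019, 9, 14), blocked))  # [1, 2, 3, 4, 5, 6, 7]
print(stay_durations(date(2019, 9, 15), blocked))  # [1, 2, 3, 4, 5, 6]
```

Each entry in those lists is a separate full price calculation, which is how a single week of availability already fans out into dozens of 120ms calculations.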
There are a few things I've already tried, including:
Deferring calculations to asynchronous processes was an absolute disaster, because it caused huge problems in other areas, including flooding the task queue with tasks (30k-40k tasks that blocked other tasks for longer periods of time) and writing to the search engine far too often. Writing to the search engine often, in very small batches, takes a long time because every write is an HTTP request and the search index needs to be rebuilt often. These writes need to be batched into larger chunks to achieve better performance.
Making assumptions about the consistency of the pricing and caching prices works quite well, but you can't guarantee that the indexed data is correct. The way I implemented this was as follows: the price throughout the week rarely changes, so when I'm calculating a price for 2019-09-14, I cache it and apply it to the date range 2019-09-14 until 2019-09-20. This has the benefit that you have to do 7 times fewer calculations, but it also allows for possible errors in pricing. This results in a total calculation time of:
( 5000 / 7 ) * 120ms ≈ 85,714 ms ≈ 85.7 seconds ≈ 1 minute and 25.7 seconds
This is much better but has its trade-offs.
For now, the problem has been "solved", but this is not a good permanent solution. Ideally, this process wouldn't take longer than 5 seconds, but I have no solution that would achieve this as of yet. If you have any ideas on how to improve this process, please let me know. It's very difficult to shave a few milliseconds off the single calculation, but that might be part of a solution. Of course, eliminating unnecessary calculations is even better. Programming isn't all about finding amazing solutions; sometimes you really struggle with something for a long time. That's why I've written this post: to highlight that I, too, struggle with some tasks all the time.
If you'd like to get in contact with me about this, possibly with some advice for me, you can contact me on Twitter or send me an e-mail at roelofjanelsinga@gmail.com.
]]>Making cross-domain XHR requests can be a pain when building a web application as a single page application, fully written in JavaScript. Your browser will send an additional request to your server, a so-called preflight request. This request won't have one of the request types you're used to (GET, POST, PUT, DELETE); it'll have the type OPTIONS. But what does it mean, and how do you solve it?
A preflight request is a request your browser automatically sends to the server when you're requesting data through an AJAX call in JavaScript and you're not requesting data from the same domain name. This also applies when you request data on localhost from a server running on a different port, for example:
# No preflight request will be sent here, the domains are the same (localhost:8000)
http://localhost:8000 -> GET http://localhost:8000/api/resources
# A preflight request will be sent here, the domains are different (localhost:4200, localhost:8000)
http://localhost:4200 -> GET http://localhost:8000/api/resources
When the domain differs, the browser will send an OPTIONS request before it sends the GET request. This OPTIONS request is simply there for the browser to ask the server if it may request this data. So if the server responds with some explanatory headers and a 200 OK response, the browser will send the GET request and your application will have the data it needs.
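The "same domain" check boils down to comparing scheme, host, and port. As a rough illustration (the real browser rules are more involved, e.g. so-called simple requests skip the preflight entirely), here's that origin comparison in Python:

```python
from urllib.parse import urlsplit

def is_cross_origin(page_url, request_url):
    """True when scheme, host, or port differ, i.e. when the browser
    treats the request as cross-origin and may send a preflight."""
    a, b = urlsplit(page_url), urlsplit(request_url)
    return (a.scheme, a.hostname, a.port) != (b.scheme, b.hostname, b.port)

# The two examples from above:
print(is_cross_origin("http://localhost:8000",
                      "http://localhost:8000/api/resources"))  # False
print(is_cross_origin("http://localhost:4200",
                      "http://localhost:8000/api/resources"))  # True
```

Note this sketch doesn't normalize default ports (e.g. 80 for http), which real browsers do.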
Solving this situation is quite simple: you just have to add headers to your response indicating what the browser is and isn't allowed to request. Below are a few examples you can copy/paste; be mindful of how much you want to allow the browser to do, though.
This section contains the settings you should use for Nginx, Apache will be further down. For this to work on Nginx, we'll make use of the add_header directive: Documentation can be found here
Allow all requests
# Allow all domains to request data
add_header Access-Control-Allow-Origin *;
# Allow all request methods (POST, GET, OPTIONS, PUT, PATCH, DELETE, HEAD)
add_header Access-Control-Allow-Methods *;
# Allow all request headers sent from the client
add_header Access-Control-Allow-Headers *;
# Cache all of these permissions for 86400 seconds (1 day)
add_header Access-Control-Max-Age 86400;
Allow all requests from certain domains
# Allow http://localhost:4200 to request data
add_header Access-Control-Allow-Origin http://localhost:4200;
add_header Access-Control-Allow-Methods *;
add_header Access-Control-Allow-Headers *;
add_header Access-Control-Max-Age 86400;
Allow certain request types to be made
add_header Access-Control-Allow-Origin *;
# Allow GET and HEAD requests to be made
add_header Access-Control-Allow-Methods GET, HEAD;
add_header Access-Control-Allow-Headers *;
add_header Access-Control-Max-Age 86400;
Allow certain headers to be sent
add_header Access-Control-Allow-Origin *;
add_header Access-Control-Allow-Methods *;
# Allow only the Authorization and Content-Type headers to be sent
add_header Access-Control-Allow-Headers Authorization, Content-Type;
add_header Access-Control-Max-Age 86400;
The same headers used in the section for Nginx will work in this section, you'll just have to implement it slightly differently. You can place them in a .htaccess file or straight into the Apache site configuration or global configuration.
<IfModule mod_headers.c>
Header add Access-Control-Allow-Origin *
Header add Access-Control-Allow-Methods *
Header add Access-Control-Allow-Headers *
Header add Access-Control-Max-Age 86400
</IfModule>
As you can see, you will need to enable the headers module (mod_headers) for Apache if this hasn't been done already.
I hope this post helped solve the problem, I know I got stuck with this for a few hours before I found this seemingly simple solution. If you have any other questions or comments, you can send them to me on Twitter.
]]>
Last month I created a blog post about my open-source contributions for August 2019. I found it a great way to keep track of everything I learned that month, so I've decided to do the same for September. This month I've been working on 4 packages, 3 of which are completed and could be used by anyone, while one is still a proof of concept in very early stages.
The packages below are completed and can be used in any project:
The two packages with the asterisk (*) are forked because the original repository seemed inactive and my proposed changes were required for my projects to progress. This was the first time I forked inactive packages and published them, with my proposed changes, under my own namespace. It was a very interesting experience, because I was able to replace the following configuration:
{
"require": {
"ageras/laravel-onesky": "dev-master#77e2de4a78bf2172df4129045c40350582aeabdb"
},
"repositories":[
{
"type": "vcs",
"url": "https://github.com/roelofjan-elsinga/laravel-onesky"
}
]
}
with this:
{
"require": {
"roelofjan-elsinga/laravel-onesky": "^1.0"
}
}
That really cleans up the composer.json file for the project.
The package below is still in the very early stages of development and shouldn't (and can't) be used in any project yet. The reason it's already available on GitHub is that I'm trying to formalize the way I'll be generating forms from JSON data. So if you have any thoughts on this, I'd love to hear them.
Have you contributed to any open-source projects in September? I'd love to hear what you've been working on (and of course see the code), so let me know on Twitter.
]]>I've always been against building my own content management system (CMS). When anyone asked me to build something like it, I just went with off-the-shelf solutions like WordPress. This was always enough for me, because I wasn't building the websites for myself. I was always convinced that I could create my content more easily by just creating HTML files and serving those as content. Setting up a CMS only cost me time and effort: installing it, connecting it to a database, and updating it regularly. So why did I end up creating my own CMS anyway? There are two reasons for this:
But you can do this with any other CMS. Yes, you're right. I have nothing to say against that because you're absolutely right in saying that. However, for me, the reasons to build my own CMS went slightly deeper than: "I can do this better". I wanted to learn to solve problems when building an application in a limited environment.
In the beginning, it wasn't even a CMS. It was simply my portfolio website, running on a Laravel application, with a database to persist the data. I hadn't updated my portfolio website in two years and my blog was running on a subdomain. I gave my website a makeover and wanted to do the same for my blog, to make it blend in with my website. This was a struggle, so I decided to read the blog's data into the Laravel application and serve it from my portfolio instead of my dedicated blog.
This worked really well...until I pulled my website from GitHub to make some adjustments. Now I didn't have any blog posts, portfolio items, or any other content. All of this content was saved in a remote database, protected by a firewall. This meant I had to download two different databases and get this to work on my local machine. I couldn't be bothered to do this, because why would I go through all the trouble just for my portfolio website? Instead, I copied all of my content into HTML files and served them from the filesystem. This worked well and it was fast. All content was available through version control and it didn't matter on which system I was working on my website, the content was always right there.
So why did I make this into a separate CMS module? Well, this is where the story gets interesting. At this time, I started a second blog for Plant care for Beginners. I was convinced of the way I could utilize HTML and Markdown files to serve static content and just edit the content of my posts through my code editor. This helped me decide that I wanted to copy and paste my portfolio website including the blogging section for this new website. And this is literally what I did, you might still be able to find references from this new blog to my old website. I copy/pasted my portfolio, removed all stuff I didn't need and got started on writing content for the plant website. But...I found a bug. When I fixed it, I thought: "Well now I have to fix this bug in 2 places, that's annoying". This led me to extract the CMS into a Composer package. This process was completed in a few days of slowly migrating parts of the websites into the package. In the end, I had a fully headless CMS that was managing all content for both websites: my personal blog and the plant blog.
This is the moment where I thought: "You know what, I want to be able to edit my content on my phone as well!". At that point, the only way to edit content was to change HTML and Markdown files on my laptop through code editors. This worked well, but what if I had the inspiration to write but wasn't close to a computer? I could edit the post on GitHub in their file editor, but every change would need a commit and I only wrote a few sentences at a time. This would add up to a lot of commits for a single post, not ideal. Initially, I started writing my posts on Google Drive. This worked really well for the longest time. The reason I got tired of that was the fact that after finishing the blog post, I had to copy/paste it to Markdown files, convert the WYSIWYG (What you see is what you get) content to Markdown and then commit and push the changes. I could write the content on my phone, but I couldn't publish it from my phone.
What I needed was a way to be able to edit my content directly in the browser and then publish it to the world from my phone, so I got to work. I created a new package, called roelofjan-elsinga/flat-file-cms-gui. This would simply be a Graphical User Interface (GUI) that utilized my headless CMS to allow me to edit all my content in the browser. I kept adding features, like being able to choose from an HTML or Markdown editor. This helped me to support some of my earlier posts that were all written in HTML files. Since all of my newer posts are written in Markdown, I added a markdown editor in the GUI, which allowed me to create and edit posts from anywhere. The headless CMS can parse files and return the content as HTML to allow my blog to display them to readers, but it can also just return the raw data, so I can edit the content in an HTML or Markdown editor.
As you might have noticed, my posts all have a featured image at the top of the page, these images are also displayed in the overview of all blog posts. The images in the overview are actually a thumbnail with a maximum width of 300 pixels. When I was working on blog posts through my code editor, I had to manually resize images to create featured images that are 1200 pixels wide and thumbnails of those images that are 300 pixels wide. This got old really quickly and when the GUI of my CMS was ready, I built a service that could do this for me automatically. All I had to do was upload an image and tell the system if I wanted a thumbnail for that image. After it uploaded, I could copy the link and place it in my markdown files. The system would automatically display the correct thumbnail in the overviews. No more tedious work that the application could do for me automatically, awesome!
So in the end, building my own CMS was just a coincidence. A very happy coincidence might I add. At this moment in time, I've got 3 websites running on the CMS and any bugs I find in one system can be fixed in all of them at the same time. This has really helped me to be much more productive. As an added benefit, I've tried to make it as easy as possible to make extensions to the CMS, so website specific features can utilize the headless CMS in any way they need to.
If you're interested in contributing to the CMS, please don't hesitate to do so. You can find the components on Github:
I'm looking for contributions in the areas of:
If you have any other feedback or want to get in contact with me, you can reach me on Twitter.
]]>For the past month, I've been using a VPN for all of my internet usage, including my work laptop and mobile phone. It's been a fascinating experiment and here are some reasons why I think you should give it a try:
When moving between the USA and The Netherlands, I missed content from the other country. While in The Netherlands, I couldn't watch some shows on Netflix I could watch in the USA. While in the USA, I couldn't access some Dutch music on Spotify. I missed the content I was able to access just a few days earlier, only because I went to another country. This was unacceptable to me because it's still me and it's still the same devices. I want to be able to access any and all content, no matter where I am. A VPN allows me to do this. It simply lets me select a country I want to pretend I'm from and I can browse the internet as if I'm actually in that country. This allows me to bypass certain checks to get straight to the content. The internet should be an open place. Transparency is a good thing (most of the time) and being able to bypass country checks makes the internet a more open space.
When using a good VPN, it will hide your actual IP address from any server you're connecting to at all times. The main benefit is that your browsing behavior can't be traced back to you personally. You're anonymous until you decide not to be, by logging into an application for example. Anonymity is good for a few reasons; the most well-known is that it makes it nearly impossible for advertisers to track you. Another important reason is that, with your IP address hidden, it's difficult for hackers to track your movements and find out where you're located.
Going hand in hand with anonymity is eluding the advertisers. When advertisers can't track your movements, they can't use your data to send you targeted ads. This doesn't mean you don't get ads, but it means that you get ads that don't apply to your browsing behavior. This is cool to me because it shows they really can't track me. I've never liked the fact that people use an aggregate of data to make assumptions about what you like. It's very ironic to me that it's part of my job, but that's more related to on-site tracking. Are you ready for a paragraph full of "radical" ideas? If so, read the next paragraph, if not, just skip it.
When you're tracking users across multiple websites, feeding the data warehouses, and using this to find out who your users are and what they like, you have a lot of power in your hands, which could be used for evil. What I push for instead is tracking on-site behavior only and allow people to opt-in for this, not opt-out. This way you can serve your customers better for what they came for. This is one of the reasons the GDPR laws in the European Union are great. Give the power of data back to the people that provide the data. When you're in another geographic area, you may not have these protections, which is why a VPN makes perfect sense. If you can't control if you're sending your data to websites, make sure the data they collect from you is useless, because they can't track it back to you.
If you think simply hiding behind a VPN isn't enough to protect your devices, you can find a VPN that proxies your data through 2 or more servers before reaching its destination. This adds many layers between you and those you want to keep out. If this is still not enough, you can choose a VPN which allows you to route your traffic through onion networks. This will make you impossible to track but is also slower. But you get the point, there are a lot of options to make yourself anonymous and you can choose how far you want to go with this.
Before I was using a VPN, I used to route my internet access through the Tor network while abroad. Differences in internet laws can mean that something completely legal in one country (for example, downloading through torrents) is illegal in another. To avoid this altogether, I made sure I was hidden. When I went to the USA, there were rumors that the government was working on a system that could tap into anyone's internet usage. This felt like a huge privacy breach to me, and I wasn't comfortable with it. I have nothing to hide, but it doesn't feel right that somebody is spying on you, just because they can. When using a VPN, it's impossible to "tap into" your data, since all data exchanged with the internet is encrypted. This means only the VPN provider knows who you are and, theoretically, what you do; from that point on, no one else does.
A VPN is great: you're anonymous and secure. You can access location-based content, so when you hit that "Unavailable in your area" message, you can pretend you're from another area and try again. You'll be able to use more of the internet and hide at the same time. Advertisers won't be able to use your browsing behavior to target you with ads. So if you're concerned that some companies seem to be following you around with ads, a VPN can help make at least the IP-based ones disappear.
Do you use a VPN? Which one are you using? Let's discuss them on Twitter!
]]>
Writing blogs can be a daunting task if you're not someone who writes a lot. I used to be this person, but I learned to enjoy writing by writing regularly. The act of writing helped me improve my skills, and after a while, I started to enjoy it more and more. Now, almost 3 years after I wrote my first blog post, I can look back at old posts and see the progress I've made since then. These are all motivating factors to keep writing blog posts. But what is my process? How do I write these posts? I've come up with a few simple steps anyone could follow:
Finding a topic is usually the most difficult of the 6 steps. A lot of people will self-edit before they've even started to write, including when picking a topic to write about. Thoughts like "Why would anyone want to read that?" or "There are already thousands of posts about this" are very common, but they shouldn't deter you from picking a topic. The only two questions you should ask yourself about a topic are these:
Do you see the silver lining here? Picking a topic is all about what YOU think, not what anyone else thinks. They're not the one writing the post, you are. If you enjoy writing about a certain topic, go for it! The second question is also important. I deliberately didn't say "Do I have something NEW to say about the topic?", because that's simply not the most important thing here. There will be others who could benefit from learning about your perspective on the topic. Sure, there could be thousands of other posts about the subject on the internet, but if you can share your perspective on it, no matter how specific to your situation, you might be able to help someone.
Let your voice be heard. Even if nobody else cares, you still have a blog post that might help yourself out in the future. Sometimes it's even useful to write a blog post when you're struggling to find an answer to a problem. Having to rationalize and think about the problem from multiple perspectives often solves it.
When you've found a topic you want to write about, it's a good idea to come up with a list of possible perspectives you wish to explore in your post. These perspectives can highlight the different aspects of your topic and help the reader understand your views more easily. You can start with a very basic bullet point list. Just write down everything that supports your point. A little spoiler: that's how I got the steps for this blog post. If I were to share screenshots from the very beginning of this post, you'd see something like this:
As you can see from those first words, I'm just coming up with a few things I might be able to use to explain what my process is. This includes some comments and hints for me to implement in the final version of the story. Once you have a few different perspectives to highlight your topic, you can move on to the next step, which happens to be my favorite.
My favorite step is just putting words on the screen. If you don't know where to start, just write down a very controversial idea about your topic. Something that has helped me a lot in the past is trying to make fun of the topic. This is a great starter because you're motivating your brain to come up with some interesting facts to use. The main goal of this step is to get words on the page and to get into a creative workflow. Once you start to write and you get into it, the words usually come to you naturally. This makes writing a lot easier because it prevents things like writer's block.
You need to record all of your thoughts into words. Don't focus on making things sound and flow nicely. Don't even worry about making sense or using grammar rules correctly. Solely focus on transforming thoughts into words. This is the part of the process where you will likely write way too many words for your post. This is not a problem and is exactly what you want, because in the next few steps you'll be revising your text 2, 3, maybe even 4 times and your story will be shaped from the first rough drafts.
Most of my posts grow to about 1500-2000 words, but after the revisions, this shrinks to 750-900 words. Usually, when I'm done, I only have to cut text and reshape a few sentences; it's quite rare that I need to write additional content for the story to make sense. But if you do feel you're not getting the story you want, you can always skip to another section of your story and work on that instead. Writing is rarely a linear process and you often jump from one section to the next.
When you've written your heart out in step 3, you've probably written multiple stories in one post. This is only natural when you're not self-editing and this is fine. During this step, you're going to tie knots and create a single coherent story out of everything. You're taking your reader by the hand and you're helping them to get through your story by providing the path of least resistance. You need to remind them what you were talking about earlier and reference those points to tie some knots and make the story make sense for your readers.
If you've created five different stories in your final version, your readers might get confused. You can have five stories, but somehow these stories need to be connected. You need to help the reader find their path through your story, ideally the path you intended them to take. If they can't find a path through your story, they'll feel like they've missed some information and they'll struggle to get through it all. So when you tie new sections to something the reader already knows, in this case some points you made earlier, you'll pull them back onto your storyline and they'll be more likely to understand what you mean and where you're going with your story.
To test if your story is coherent, you might want to use a screen reader. Listening to your post helps you identify parts that seem to be in the wrong place. You can use this to tie knots or move sections around to help guide the reader. After all of these revisions, you need to make sure you haven't made any grammatical errors. Don't worry, it's quite simple.
Revising your grammar and spelling has become much easier with the internet. There are a few great applications to help you with this: Grammarly and the Hemingway App. These applications check your spelling and tone, let you know if the sentences you use are easy to understand and whether they're active or too passive, and flag any spelling mistakes. It's not a flawless process, but it does find the majority of your mistakes and helps you fix them.
But why is this important? Well, when you've made it this far into the process, you've written a coherent story. It would be a shame if your well-crafted story was published with grammatical errors. Grammatical mistakes are a very low bar that invites people to abandon your story. So don't give them the easy way out; do some of the hard work by cleaning up your sentences.
Of course, you can always ask other people if they're willing to proofread your stories. They might spot difficult-to-understand sections or grammatical errors you hadn't spotted before. A proofreader will almost always make your stories better, because fresh eyes can give you a whole new perspective on your story.
You've come to the last step of the process. It's time to find some visual aids to support your story. These are usually some photos that help clarify your story and help the reader to visualize what you're talking about. This sounds simple, but it can be quite difficult sometimes. It's a step I'm still struggling with every time I write a post. I use the main picture to support the title of the post, but also to catch people's attention.
I'm not an illustrator, nor do I know any, so finding pre-made pictures that support my post is difficult. Find some resources with copyright-free images, like unsplash.com, to accompany your story with some great visuals. If you're very serious about the presentation of your content, you might consider using services like Fiverr to request custom graphics from illustrators. Anything will do, just make sure it supports your story and doesn't distract the readers.
I hope you enjoyed reading this post and have gotten a better idea of my writing process. I put a lot of effort into it, which is why this post is almost twice as long as my posts usually are. If you have any questions, I'd be more than happy to answer them. Just contact me on Twitter.
]]>
I'm trying to become a more skilled communicator with my peers and non-technical people. Being a good communicator is vital to working well in a team. By writing more often, I'm hoping to improve my ability to transform thoughts into comprehensible sentences. By writing for different audiences, I'm attempting to figure out what kind of word choices help to communicate my thoughts more effectively.
There are several ways I'm attempting to communicate my ideas, technical solutions, and progress. These are as follows:
As you might have noticed, I've published a few blog posts in the past few weeks. These are mostly used to help me track my progress over time. However, like code, ideas and solutions become vague over time. By learning to improve my writing I hope to communicate my ideas more clearly, so I can keep understanding what I was talking about when I originally wrote the text.
When looking back at my very first posts on this blog, I can already see very clear improvements. My posts have gotten a better outline, containing actual introductions and conclusions. Reading through the posts has become easier. Even though English isn't my first language, you can tell that my grammar and general language skills have improved. This just goes to show how important it is to revise your work after you've written down your thoughts. In a year or two, I will look at this post and think: "I was so naive, look at how much I've improved since then". And those kinds of thoughts are exactly why I write blog posts because measuring progress can be a real motivator.
Laravel, a great PHP framework, became popular because it had great documentation. When you're able to communicate what you can do with a piece of software, people are more likely to pick it up. When you have a great piece of software, but you're the only one that knows how it works, you'll most likely be the only one that will be using it.
This is why I'm making it a point to document a lot and do it well. Of course, I can always improve, which is why I'm going to publish my documentation here as blog posts. This will help me keep revising and improving it. I revise my blog posts quite a lot over the course of a couple of days, and I should follow the same process for the documentation in my README files.
Any time I write software, I try to document how it works and why the code exists. The reasoning is usually the deciding factor for keeping or replacing pieces of software, so making my intentions clear helps when refactoring inevitably becomes necessary. Let's backtrack a little bit... making my intentions clear when writing software has a side effect: other people read your reasoning and might think "Hold on a minute, this can be done much more easily". In that case, my documentation has done its job, because it has made the software better.
Being able to discuss your ideas with peers, in any way you can, be it face-to-face, written or a phone call, helps you iron out your ideas and solutions. So when you get better at communicating, your peers will be able to help you quicker and more effectively. Part of this process is formulating what you're trying to accomplish. If you've been practicing by putting your thoughts into words, this will be much easier than when you've been inside your head the whole time. Sometimes by writing down your thoughts, you've already solved the problem you've had in the first place. So making a habit out of putting thoughts into words, formulating precise questions, and asking your peers for help, will ultimately make you a better developer and colleague.
"Practice makes perfect" is what they say, and I have to agree. Sure, you can have a lot of talent and be a very good writer, whether in technical writing or creative writing, but if you never write, you won't become a better writer. The same goes for programming: if you only follow tutorials but never actually write a piece of software, you're never going to get better at it. The only way to improve your skills is to put them into practice and repeat, repeat, repeat. With that said, if you want to become a better (written) communicator, you have to... well... communicate. You have to make the mistakes and learn from them. I've made plenty of mistakes trying to improve my communication skills and I've done my best to learn from them and improve myself.
Do you have any tips for me? How can I improve my writing? Let me know on Twitter, I'd love to hear from you!
]]>Testing your code is essential if you want to write code that doesn't break your application. You should use tests as an assurance that your code does what it's supposed to do, nothing more, nothing less. When you have code that interacts with external services, testing this code becomes more difficult. You're not responsible for the accessibility of other services, but these still influence the state of your application. Any problems in external services, or the connection to those services, will break your tests. This is not (always) representative of the actual state of your application and can cause false negatives. Don't get me wrong, you should put scenarios in place that deal with these problems, but they shouldn't change your testing expectations. One input should trigger a certain response, a response that's predictable.
Some of my unit tests, which were using the Google Geocoding API, returned failing assertions. These tests weren't failing because of our written code, but because our company moved to a new office. This meant that our IP address changed, which is why Google invalidated our API token: the token had an IP restriction and we were no longer accessing the services from our whitelisted IP address. Then I added another layer of complexity when I enabled a VPN connection on my laptop, which changed my IP address again. The fact that my tests were making false assertions because a service wasn't accessible anymore was a sign. The code handled the API responses, but the expected result was never returned. My code was working, but the tests didn't reflect this.
The idea is to build a package that records all API calls you make to external services during a test run. I will do this by saving URLs and/or request headers in a mock file along with the received response. This lets me run the tests with actual calls to the API only once and return stubbed values in later test runs. Since I'm testing the handling of data in my unit tests, I still need to make use of these API results. But the real API call doesn't make the acceptance tests any more reliable, and reliability is the point of writing tests.
The whole goal of recording the API responses is to get the actual responses without making API calls. But if you're only using part of the responses, like in my case, an intact response is not the most important part. The most important part is an accurate partial response for the case I'm testing. I need to be able to predict a certain behavior for a certain API response. So by providing a mock with some data, I can stub the exact response I need to test the behavior of the code under test. This allows me to write predictable code and write tests with a single input and a single output.
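As a rough sketch of what this record-and-replay idea could look like (all names here are hypothetical: the RecordingClient class, the JSON mock file layout, and the injected $fetch callable are my own illustration, not an existing package):

```php
<?php

// Hypothetical record-and-replay wrapper: the first run performs the real
// API request and stores the response; later runs return the stored copy.
class RecordingClient
{
    public function __construct(
        private string $mockFile,
        private $fetch // performs the real API call; injected so it can be faked
    ) {
    }

    /** @return array the decoded API response for this URL */
    public function get(string $url): array
    {
        $mocks = file_exists($this->mockFile)
            ? json_decode(file_get_contents($this->mockFile), true)
            : [];

        // Replay: a recorded response exists, so no real call is made
        if (isset($mocks[$url])) {
            return $mocks[$url];
        }

        // Record: perform the real call once and persist the response
        $response = ($this->fetch)($url);
        $mocks[$url] = $response;
        file_put_contents($this->mockFile, json_encode($mocks));

        return $response;
    }
}
```

In a test, the $fetch callable would wrap the real Geocoding request; after the first run, the JSON file acts as the stub, so the tests no longer depend on the service being reachable.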
At this point I'm not quite sure what I'll go with. Once I've implemented a solution, I'll update this post and describe the solution in detail.
]]>Recently I've been quite active with contributing to open source projects on GitHub. Part of the reason is the necessity to move other projects forward, and the proposed changes allow me to do so. Another reason is that I have great admiration for these software packages and would like to contribute to making them better, not just for me, but for everyone else.
My recent contributions were made to the following repositories:
I'm very glad I could contribute in a meaningful way, by actually suggesting internal changes and building them to work for other people as well.
Of course, I've also been working on my own packages, by adding new features, writing tests, and fixing bugs. For some of the packages I've also added an integration with TravisCI to be able to automatically test the packages and make sure everything still works. All of my own packages include:
roelofjan-elsinga/flat-file-cms and roelofjan-elsinga/flat-file-cms-gui are currently my biggest packages. The flat-file-cms is a simple package that gives you a drop-in flat file CMS in Laravel. The flat-file-cms-gui is simply an administration dashboard that allows you to interact with the CMS through a Graphical User Interface (GUI). There will be extra packages to supplement the flat-file-cms, like flat-file-cms-auto-publish and flat-file-cms-seo. These will be added as separate packages, because I'd like to keep the core package clean and focused on the content itself. The GUI package is simply a graphical representation of the core CMS package, and could also be replaced by a completely different graphical implementation. It simply serves as "the official GUI", nothing more, nothing less.
I hope you've gotten better insights into what I've been working on in the past few weeks and I hope you'll come back for a future update. Let me know what you think of this format of blog posts by contacting me on Twitter.
]]>I've taken the first steps in working with event sourcing and, in particular, event sourcing in PHP. It's a confusing concept at first, but once I got the gist of it, I was convinced of its value. So you might be wondering: what is the biggest value of event sourcing? I'll explain in this post.
When using event sourcing, as opposed to a traditional CRUD system, you're saving events instead of data. This has the benefit that you can keep track of any and all data changes over time. The key aspect here is over time. In a traditional application, you only know the state of the data right now. You don't know what it looked like yesterday or last week, only what it looks like now. For many cases, this is perfectly fine, but for some processes, like keeping track of transactions, you need the history of data changes. By using event sourcing, you preserve data. You never change data in place; you simply append a new event describing the change.
When using a traditional way of keeping track of your data, you only know what your data looks like right now. This makes it very difficult to write reports about things that happened in the past because you don't actually have the data. The thing you'll have to tell your superiors is: I can't do that or it won't be accurate, but I've implemented it and it'll be possible for next quarter. This is a situation you'd rather not find yourself in. Event sourcing allows you to generate reports or projections about anything that has already happened and is recorded, and also about events that still need to take place. This is one of the aspects of event sourcing that really blew my mind when I started to understand the concept.
The fact that event sourcing records every single event that has taken place since the beginning also allows you to look at a situation as if you were in the past. It's very similar to git log, where you can see what has been changed, by whom, and when. This can also be done with event sourcing, and it really helps you understand why some data is the way it is, simply by looking at the changes through time. Event sourcing also allows you to choose a desired state of the data and treat that as the latest version, effectively removing all changes that took place after that point. This is comparable to reverting a branch to a certain commit in Git.
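To make the concept concrete, here's a toy sketch of the core idea (the Account class and event names are made up for illustration; real projects would use a dedicated event store library): state is never stored directly, it's derived by replaying the recorded events, and replaying only the first N events gives you the state at a past point in time.

```php
<?php

// Toy event-sourced bank account: the balance is never stored,
// it is derived by replaying the recorded events in order.
class Account
{
    /** @var array<int, array{type: string, amount: int}> */
    private array $events = [];

    public function deposit(int $amount): void
    {
        $this->events[] = ['type' => 'deposited', 'amount' => $amount];
    }

    public function withdraw(int $amount): void
    {
        $this->events[] = ['type' => 'withdrawn', 'amount' => $amount];
    }

    // Replay all events (or only the first $until events) to derive the
    // state. Passing an earlier $until answers "what was the balance then?"
    public function balance(?int $until = null): int
    {
        $events = array_slice($this->events, 0, $until ?? count($this->events));

        return array_reduce($events, function (int $carry, array $event): int {
            return $event['type'] === 'deposited'
                ? $carry + $event['amount']
                : $carry - $event['amount'];
        }, 0);
    }
}
```

Reverting to a past state, as described above, then simply means treating a prefix of the event list as the whole history.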
All-in-all, I'm very impressed with the concept of event sourcing and I hope to implement it more and more in certain cases. The fact that all the valuable data is preserved and you have Git-like abilities with data in some sort of database or file system is very powerful.
I've written this post because I like to keep myself up-to-date about my progress in skills. I've seen great gains in my programming skills since I've started to write blog posts and this motivates me to keep learning.
]]>This post is for developers who make use of polymorphic relationships in Laravel and have noticed some performance issues. This post assumes you're using MySQL or PostgreSQL. If you're still reading this, it means you are in this situation, and because of that, I won't delay you any longer.
The query performance has a lot to do with columns being marked as indexes. The primary key, usually the id field, is most likely marked as an index; this means that you can very quickly query a database table for a record with the matching id. However, when you're not using indexes on the fields you're interested in, the database has to process your query by looking at all records in a table to figure out whether each one matches. If you're using an index on the requested field, the database already knows exactly what you want and can return the requested records very quickly.
This is what we'll do for the polymorphic relationships. If your database tables don't have 100,000 records or more, this won't benefit you much: your query will already be quick, and you won't notice much of a difference. In my case, the table in question had over 4 million records, so it really took me by surprise that the query was slow, because 4 million isn't such a large number that the query should be slow. This is when I noticed the _type and _id columns weren't marked as an index.
So why a composite index? Well, in order to query for the related model, you need both the _type and _id columns. Together, these two columns form a single relationship, which is why we're going to create a single index for the combination of the two columns.
Now that you understand what we're doing, let's get to the code.
First, make a new migration through make:migration. Below I'll give you the specific migration configuration I used to create indexes on the activity_log table. In this case, the indexes for the polymorphic relationships on the spatie/activitylog package weren't included out-of-the-box. I've since made a Pull Request to the GitHub repository and it has been approved, so future users of the package won't have the same problem.
use Illuminate\Support\Facades\Schema;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Database\Migrations\Migration;
class CreateIndexesOnActivityLogs extends Migration
{
/**
* Run the migrations.
*
* @return void
*/
public function up()
{
Schema::table('activity_log', function (Blueprint $table) {
$table->index(['subject_id', 'subject_type'], 'subject');
$table->index(['causer_id', 'causer_type'], 'causer');
});
}
/**
* Reverse the migrations.
*
* @return void
*/
public function down()
{
Schema::table('activity_log', function (Blueprint $table) {
$table->dropIndex('subject');
$table->dropIndex('causer');
});
}
}
When looking at the up() method, you can see that the first argument passed to $table->index() is an array. This means that I'm creating an index called subject which contains a combination of subject_id and subject_type. The index called causer contains a combination of causer_id and causer_type. After you've run this migration, you should have very quick queries again.
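Outside Laravel, the same composite index can be illustrated with plain PDO and SQLite (a hypothetical stand-alone sketch; the table and data are made up, but the WHERE clause matches the shape of a polymorphic relationship lookup):

```php
<?php

// Stand-alone illustration of a composite index on a polymorphic pair.
$pdo = new PDO('sqlite::memory:');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$pdo->exec('CREATE TABLE activity_log (
    id INTEGER PRIMARY KEY,
    subject_type TEXT,
    subject_id INTEGER
)');

// One index covering both columns of the polymorphic relationship
$pdo->exec('CREATE INDEX subject ON activity_log (subject_id, subject_type)');

$pdo->exec("INSERT INTO activity_log (subject_type, subject_id)
            VALUES ('App\\Post', 1), ('App\\Post', 2), ('App\\User', 1)");

// The relationship lookup filters on both columns at once, which is
// exactly the shape the composite index can serve without a table scan.
$stmt = $pdo->prepare(
    'SELECT COUNT(*) FROM activity_log WHERE subject_id = ? AND subject_type = ?'
);
$stmt->execute([1, 'App\\Post']);
$count = (int) $stmt->fetchColumn();
```

The column order in the index matters: an index on (subject_id, subject_type) serves queries that filter on subject_id alone or on both columns, which is why the migration above lists the _id column first.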
I hope you found this post useful, it certainly helped me solve some querying problems.
]]>I've started a new blog about plants: Plant care for beginners. It's very different from what I normally write about, but too often I heard that people don't know how to take care of their plants, or "they just keep dying". I've had my fair share of struggles with them, but through a few very simple tricks, I've learned how to keep them alive. So my objective is: "How can I help others keep their plants alive?".
Since I love writing, this was a pretty simple start: just start a blog with tips about the plants you own and have been able to keep alive over a longer period of time. So far, I've posted two guides on there, so that's very exciting. One thing that was a bit more difficult was reaching the target audience: people who struggle to keep their plants alive. Being a web developer, I thought to check Twitter, but soon realized that the target audience isn't active on there. But then I had a thought: "Why do people like plants?". Well, they make you feel calm and happy, they look good, they smell... wow, wow, wow... they look good! You need visual stimulation... Instagram!
This is when I checked Instagram for my target audience, and there are a lot of them! There aren't just a lot of them, they're also very active! They post their own plants, look at other plants all day, they leave likes and comments with questions and they follow everyone in the community. This was a goldmine! This is when I decided to create a new Instagram account for my blog. One that was fully focused on the plants and the care of them. I didn't want any distractions from my personal Instagram account, they needed to be two separate entities.
I set up the account and switched it to a business account, to be able to have insights into engagement and interactions. I put my blog in the bio and just started posting. I didn't expect a lot of engagement in the beginning, but by the second day, I had 65 followers. On the second day alone I gained 50 followers, and this blew my mind. Keep in mind that my personal account has 300 followers, but it took me 3+ years to get that amount. Now I have a third of that in 4 days.
By putting the link to my blog in my bio, I expected to get at least some visitors to my blog, and I actually did get a few. The day I got 50 followers, I had 6 people going to my blog. This isn't a lot, but it's interesting to see that little "spike" when something happens on a different platform. I think I've found where I should focus my attention, besides actually writing blog posts of course, and that's posting quality photos and advice on Instagram. This is where most people will benefit from it. If those people go to my blog and find the detailed blog posts, that's amazing, but as long as I was able to help them with their plants, I'm satisfied.
The next steps are to attempt to drive traffic to the blog through helpful comments and advice on Instagram. But using only Instagram isn't enough yet; I'm sure there are more places where my audience has an online presence, I just have to find them. Perhaps they're on Reddit or another platform like that. This is all in the future, but the foundation has been built, and from here I'll attempt to grow my audience by doing what I like to do: help others keep their plants alive.
Do you have any advice on how I could find my audience's online presence more easily? Perhaps you have some questions for me about this topic. You can reach me on Twitter or, if you're interested in plants, also on Instagram @plantcareforbeginners.
]]>
All the videos and articles on this subject were very positive about waking up early, so I decided to give it a try. I was skeptical before I started, but as soon as I started waking up earlier, I was convinced it was a positive change for me. Here are the most important things I found:
Of course, this list wouldn't be complete without an explanation of these points. You can see the overall theme is that in order to be productive, you need to be motivated. Without motivation, it'll be very difficult for you to make a habit out of this.
You need to look forward to something in order to motivate yourself to get out of bed earlier. A lot of people say they have trouble getting out of bed in the morning. While this may be true on most days, there are also days when it's very easy. Think about a really exciting day you can't wait to start. I bet you have no problems jumping out of bed and getting started on your day. So really, the biggest problem with getting out of bed earlier is a lack of motivation. You need a certain goal to be able to wake yourself up. Matt D'Avella has an amazing video about this aspect that initially triggered me to wake up earlier. You can find it on YouTube.
Give yourself three very simple tasks the night before. The simpler the better. In my case, I went for things like write 100 words for a blog post or work out for 10 minutes. I could easily complete these tasks on a given day, now I'm just motivating myself to get the first 2-3 tasks for that day done very early on. If you're having trouble coming up with tasks, you can do the dishes or mop the floors. These tasks are something simple, that you can do while listening to a podcast or watching a video and it's just something to get you into doing something. This will lead to you doing more things after these initial tasks. The most important thing is to get started.
All of a sudden I had an additional hour to do things in the morning. I had more time to work on my side project PunchlistHero, write blog posts, and work out. Those are just some of the examples of how I filled in this additional time. This hour allowed me to really focus on something, rather than being distracted by anything or anyone. This also meant that by the time I went to work, I had already completed several of the tasks for that day and I was already awake and ready to go.
If you get up earlier, you'll be tired earlier. That's pretty straightforward. Where I used to start to get tired around 23:00 (11 pm), I'm now tired at 22:00 (10 pm). I figured from the beginning that I wasn't a night owl anyway, and in the evenings I'm often very unproductive. This meant that I took an hour away from the unproductive part of my day and gave it to the productive part.
I've been doing my best to get at least 7 hours of sleep, ideally 8 hours. By turning on alarms on the weekends, I'm trying to reduce the "time in bed" difference between the weekdays and the weekends. This way I'm able to keep a fairly consistent day/night cycle, which makes it much easier to wake up early as well. After a week or two, I started to wake up by myself, sometimes even slightly before my alarm went off. After a while, I didn't need to set the 2-3 goals the night before anymore; waking up early had become a habit.
Before, I could be a little absent at times because I was thinking about something, but now I can get more of that done in the morning. I sort out my thoughts in the early morning and am done with them for the rest of the day. This has allowed me to be more present during meetings and while working on complicated tasks.
The extra time in the morning has allowed me to work on my side projects and finish tasks, so I don't have to worry about them during my work hours.
In the spring and summer months, it's quite easy to get up early. The sun is already up and this helps you to wake up more quickly. In the winter, this is a bit of a problem. Getting up in the winter means that you get up when the sun is still down so it's pitch black outside. This makes waking up the natural way quite difficult. I came up with a simple solution for this: sunlight LED strips.
This sounds strange, but my home office has LED strips on the ceiling. When they're turned on, it almost feels like actual sunlight. Using this method, I've been able to wake up quite easily in the winter and fall months. Now that it's light again when I wake up (April), I no longer need the LED strips to wake up. I can simply open the curtains and see the sunlight.
How did it go? How was the experience for you? If it was positive, what did you use all of this extra time for? If it was negative, why didn't it go the way you expected? Let me know on Twitter!
]]>Most people know Ubuntu as a server operating system (OS); however, it can also be used as a desktop OS. This post describes several reasons to use Ubuntu as a desktop OS over something like Windows or macOS. Here are some of the reasons I chose Ubuntu as my primary OS instead of Windows, which is what I used before:
Installing applications on Ubuntu works in a similar way to Android and iOS: you download applications through an app store (also called a repository). If you want to install additional applications, you choose them from this repository; however, you can add third-party repositories for applications that aren't in it. This means you have control over which applications are installed and from which sources. It also means that these applications have official support and are deemed secure by the Ubuntu core developers. If you want to add applications from extra repositories, you will have to trust that these repositories are not harmful. If you don't trust a repository, you simply don't add it to your system. You are in control of the source of the application, not someone else.
Because the Ubuntu community is really large, there is a lot of support in case you have any questions. This means that any problem you might face has likely been solved in the past, and you can simply reproduce the steps others took to solve it. Because the community is so large, applications are regularly updated to provide better security and to work more efficiently. It also means that applications are supported for a long time before they're deprecated in favor of a newer version. Ubuntu has LTS (long-term support) versions, which are supported for 5 years, meaning you can use the same system for 5 years before you need to upgrade to a newer version.
When installing Ubuntu, you have the opportunity to install third-party libraries. You can choose to not include any of these libraries and install only the bare minimum. If you only install the bare minimum, you will have a very clean and streamlined version of the OS. From this clean base, you can install anything you want, without starting out with a lot of bloat and system applications.
When I got a new laptop, Windows was preinstalled, so when booting the system I had to go through the installation and setup process. This process took nearly 45 minutes, which annoyed me. I wanted to start the laptop and instantly get to the task at hand, but because the process took so long, I lost all focus and motivation. When I did finally get into the desktop environment, I was shocked by how many applications were preinstalled. The start menu was completely filled with all kinds of nonsense, and 30-40 applications were already installed on the system, waiting for me to start using them. It's clear they're serving a target audience that's not me, a software developer, but rather media consumers.
I had a similar experience when installing macOS. There were so many system applications installed that I couldn't remove. I knew I would never use some of the programs, and not being able to remove these applications was a burden. That is when I fully understood why Ubuntu has an overall higher performance than these two operating systems: it has less bloat installed, and any applications you know you will never use, you can simply remove to free up resources for the things that matter to you.
The Ubuntu community is very large, so if you want to change anything about the operating system, you can find a way. For example, if you don't like the default desktop environment, you can install a completely different one. You can simply install KDE or Xfce if you don't like the GNOME or Unity environment. If you do like the default environment but want to change its appearance and functionality a little bit, you can install "Unity Tweak Tool" and change anything you want.
Don't like the default file manager? Install another! Don't like the way you have to navigate from folder to folder in Nautilus (the default file manager)? Run a command in the Terminal to enable typing the path you want to go to. Because the community is large and active, you'll be able to find out how to do this very quickly.
Any Linux distribution (distro) is free of charge, unless you want to use some kind of enterprise OS like Red Hat. This means there's no reason not to try some of them for yourself. Most distros even have a Live USB mode, which means you can try the distro without installing it on your system; you simply run it from a USB drive. The fact that it's free of charge has allowed me to revive a few old laptops that were running Windows but were either corrupted, too slow, or just not working properly anymore. The Live USB mode has allowed me to install a clean operating system that runs very smoothly on old, left-for-dead hardware. These particular laptops now have a new life and are being used again. This means I didn't have to invest money in another laptop or pay for another Windows license.
What are the reasons you started to use Ubuntu? If you haven't used it yet, why not? Let me know on Twitter because I love hearing the stories of others regarding this amazing operating system.
]]>When I moved from Medium to my personal blog, I didn't just leave the platform with all of its built-in sharing opportunities behind; I also left the comments behind. This was intentional, and I'm writing this to explain why I've left out a comment system. If you don't want to read all of this, I totally understand! Let me summarize it for you:
For those of you who want to find out what I mean by these 4 points, keep on reading!
Social media has changed over the past few years: from sharing your own life and your interests with others to reaching the most people in the shortest possible time. I'm personally not a huge fan of reaching X amount of likes or receiving Y amount of comments. Receiving a single thoughtful comment means a lot more than receiving a thousand emoji comments. Reaching many people helps you build a brand quite easily, but at the same time, it makes social media less social. It's being used for commercial purposes and is becoming less personal. To take back social media, and actually share part of myself, I've taken away the ability to quantify meaningful interactions and put the focus back on the content.
If the entire purpose of writing blog posts is to have others read them, you should take a step back. Especially when you're just starting out, no one will read your posts. So you shouldn't use views, likes, or comments as a measure of your writing skills. If you do, you'll get discouraged quickly, even though your content may be incredibly good. You should create content because the act of creating it is fun for you. The views, likes, and comments will follow if you're consistently posting great content.
This is exactly why I don't track views, likes, comments, and other metrics. The only thing I track is which posts get the most attention. I track these posts because it means I have an opportunity to share my personal story about those specific topics more often. These views are never the holy grail, though. I've written multiple posts about subjects that I know haven't done well in the past, according to the metrics of the Medium platform. I wrote about those subjects again because they were fun to write about and I found the topics interesting.
When others read my posts, I love it, but when they don't, I don't get discouraged. If I can read my own post later on and help myself solve some kind of problem, that's all I need. Ultimately, I'm writing for myself, be it for my own entertainment, to get better at writing, or to learn to help others. If I've been able to put my thoughts into coherent sentences that tell an interesting story, I'm satisfied.
Imagine I added a comment system to this post and someone asked me a question. That's pretty cool, right? Now imagine that I answer in a very thoughtful way to help this person, taking time to make sure I get my point across. But this person won't be notified and will likely never check this post again. Now both sides have wasted their time: they've thought of questions to ask, and I've answered thoughtfully, but it was all for nothing.
In other words: there are many other ways to interact with me that are much better suited for this purpose. My website contains my e-mail address. If you have questions or remarks, send me an e-mail and you're guaranteed to receive an answer. In addition, you'll be notified when I've answered your message, because the answer will be in your inbox. Every single post contains a link to my Twitter profile, where I can be reached most of the day. You can send me a tweet or a direct message there, and you're guaranteed to receive an answer.
In short, there are many other channels to reach me, so it's pointless for me to spend the time to add a comment system that won't be used to its full potential. There are already too many channels to keep track of. It's gotten to a point where I gave up on Facebook and Instagram because my time is saturated with other channels. This is one of the reasons those accounts aren't listed on my website.
In my post "SEO and personal marketing for developers" I mentioned that I moved away from Medium because I wanted to own all of my own content. I moved everything to a platform that I owned and by doing so, present my posts in the exact way I wanted to.
Well, a comment is also content on a page, even if it's not written by me. I don't (really) control these comments, and that just wouldn't sit right with me. I'm not one for censoring comments that people leave on my posts, which means people could leave whatever they wanted on posts I've spent time writing. I don't even want to think of the headaches this could cause in the long run. This is why I opted not to have comments at all: it takes away the pain of "policing" the comment section. If you really want to send me a public message, send me a tweet. If you want to send me a private message, there are Twitter and e-mail.
If you've gotten this far, hello, thank you for reading this post! I appreciate that you took the time to share these short few minutes with my content. This section may be a bit redundant, but if you have any questions or remarks, I'd like to direct you to my Twitter profile or to the homepage of my website, where you'll find my e-mail. If you are using comments on your own blog posts, why? If you don't, what are your reasons? I'd love to hear from you!
]]>Understand and learn about the different configuration file types available to set up your project in your workspace.
Configuration: people love it and people hate it. You can change the behavior of your application with it and customize it to your needs. When it's overly complicated and there's no documentation, you get frustrated. So how do you choose which file types to use for this? There is no easy answer, so let me break it down a little bit. In this post, I'm going to highlight four different file types that I have used and will use for these kinds of tasks: JSON, YAML, XML, and dotenv.
The first file type I'll highlight is JSON. JSON is very popular when you need to share data between different programming languages, or even different applications. It's the go-to method for data transfers between modern APIs. It's compact, easy to read, and all major programming languages can parse it without any problems. This is a very simple way to get started quickly.
However, there are disadvantages to using JSON as well: you can't use comments in a JSON file or JSON structure. This means that you will need to write documentation for your data structure. Writing documentation is a good thing anyway, but you don't have the opportunity to clarify any data in the data itself.
I would use JSON files for very simple configurations and settings that you want to be able to parse quickly, without much effort.
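To make this concrete, here's a minimal sketch of loading a JSON configuration with Python's built-in parser. The keys and values ("name", "debug", "database") are invented for illustration and aren't from any real project:

```python
import json

# A made-up JSON configuration; note there is no way to add comments
# inside the JSON itself, as mentioned above.
config_text = """
{
    "name": "my-blog",
    "debug": false,
    "database": {
        "host": "localhost",
        "port": 3306
    }
}
"""

# Parsing is a one-liner in most languages, which is JSON's strength.
config = json.loads(config_text)
print(config["database"]["port"])  # 3306
```

Nested objects map directly onto dictionaries, so simple settings like these are available with almost no effort.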
An example of JSON configuration can be found at the top of this post.
YAML is a compact yet readable alternative to XML, which allows for objects and arrays. This makes it useful if you're used to JSON, because you can emulate the same data structures in both file formats. Unlike JSON, you can actually use comments in your configuration files, allowing for inline documentation, possible configuration options, and altogether a more seamless experience for developers.
Of course, all good things have disadvantages too. Not all programming languages have native support for parsing these files. Most, if not all, languages have additional libraries you can install to parse them, though, so you're not completely stranded when you want to use YAML but your programming language doesn't support it. It also has quite a steep learning curve for writing properly formatted files. If you're used to C-type languages, this will be a difficult transition. Like Python, YAML needs to be indented properly to work correctly. If you accidentally indent a line in a different way than the parser expects, it might assign the chosen properties to either a parent or a child object.
I would use YAML for more complex kinds of configuration. Its ability to contain comments while still being compact allows you to quickly write something new and document it. However, I wouldn't use it for simple configurations, because it takes a bit of effort to get it to work.
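To illustrate the comments and the indentation pitfall mentioned above, here's a small YAML sketch; the keys are invented for illustration:

```yaml
# Database connection settings (comments are allowed, unlike JSON)
database:
  host: localhost
  port: 3306        # inline comments can document individual values
  options:
    - utf8
    - strict_mode   # indenting these list items differently would
                    # attach them to another parent, as noted above
```

The two-space indentation is what makes "options" a child of "database"; getting it wrong silently changes the structure rather than producing an obvious error.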
XML, the markup language a lot of people love to dismiss instantly: "It's old-fashioned, get it out of my face!". However, because it's been around for a while, it has proven to be very reliable, and this has also helped get parsers for it included in a lot of languages. Many languages either have native built-in parsers for it, or there are extensions and libraries you can use to extract data from it. It also allows for comments, so you can inline all the needed documentation if you choose to. It looks like HTML, which can make it easier to understand than JSON or YAML.
There are some downsides as well. The configuration files are much larger in size than JSON or YAML. This isn't a problem if you don't have a lot of data or if you won't be sharing it with anyone, so file size could be relevant or irrelevant depending on your situation. XML parsers are more difficult to use than JSON or YAML parsers. Every time I have to parse the data in PHP, I get a little overwhelmed by how complex the parser actually is. After a while you understand why it works this way, so it does get better. XML also has quite a steep learning curve for writing proper files: a simple mistake could invalidate your whole XML file. Looking at examples and experimenting with this will be useful.
I would use XML for simple but also very complex data structures. It's very simple to create a hierarchy and to add properties to elements. Most languages have native parsers for it, so you can get started right away. You can make these files as simple or as complicated as you want. It won't be the most readable data, but if you're used to HTML, you will understand what's going on.
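The post talks about parsing XML in PHP; as a language-neutral sketch, here's the same idea with Python's built-in parser. The structure and attribute names below are invented for illustration:

```python
import xml.etree.ElementTree as ET

# A made-up XML configuration: comments and attributes are both
# available, unlike in JSON.
config_xml = """
<config>
    <!-- Comments are allowed, so you can document inline -->
    <database host="localhost" port="3306">
        <option>utf8</option>
        <option>strict_mode</option>
    </database>
</config>
"""

root = ET.fromstring(config_xml)
db = root.find("database")
print(db.get("host"))  # localhost

# Repeated child elements behave like a list.
options = [o.text for o in db.findall("option")]
print(options)  # ['utf8', 'strict_mode']
```

This also shows why XML parsing feels heavier than JSON: you navigate elements and attributes explicitly instead of getting a plain dictionary back.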
Dotenv (.env) files are by far the simplest configuration files you can think of. They're technically used as configuration files for a specific environment, but you can change a lot of behavior with the values they hold. Dotenv files are usually specific to a single environment and shouldn't be saved in version control. You can use comments in dotenv files, but since you most likely won't be sharing these with anyone else, this will be for your own benefit and not for others. This type of configuration has a very simple key-value format.
There are a few disadvantages to using dotenv files for configuration. The first is that all keys need to be unique and all values are simple strings, so there is no way to save objects or arrays. Another disadvantage is that you shouldn't add these files to version control. This means you could have completely different configurations in each environment. That sounds bad, but it's also one of its strengths.
Dotenv files shouldn't be used for any complex configuration. They should be used for configuring connections to external services, holding usernames and passwords, and keeping track of the current application environment. This is what they're great for, but nothing more complicated.
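The key-value format is so simple that a sketch of a parser fits in a few lines. This is a hand-rolled illustration, not how any particular library does it (real projects would typically use something like python-dotenv or phpdotenv), and the keys are invented:

```python
# A minimal dotenv-style parser: KEY=VALUE pairs, "#" comments,
# every value a plain string.
def parse_dotenv(text):
    values = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        key, _, value = line.partition("=")
        values[key.strip()] = value.strip()
    return values

env = parse_dotenv("""
# Connection to an external service (names invented for illustration)
APP_ENV=production
DB_HOST=localhost
DB_PORT=3306
""")

print(env["APP_ENV"])  # production
print(env["DB_PORT"])  # 3306 (still a string, as noted above)
```

Note that DB_PORT comes back as the string "3306", which is exactly the "all values are simple strings" limitation described above.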
If you're looking for a nice way to store any complex configurations, choose one of the first three. If you're looking to keep track of simple data, choose a dotenv file. Are you using any other file type for configuration? If so, why are you using this file type specifically? I'd love to hear your take on this subject! Let me know on Twitter what you use to configure your applications.
]]>In a previous post, SEO and personal marketing for developers, I mentioned that you need to generate a sitemap in order to submit all the important pages from your website to the Google Search Console. But how do you generate a sitemap? What does it look like? These are the questions I'll answer in this post.
Before I start, I'd like to show you an example of a sitemap file. It's really quite simple, and it's easy to add new URLs to it.
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
    <url>
        <loc>https://example.com/</loc>
        <lastmod>2019-01-01</lastmod>
        <changefreq>monthly</changefreq>
        <priority>1</priority>
    </url>
</urlset>
That's it, that's all you need to create a sitemap. As you can see, a "urlset" element wraps everything. Then you have the "url" element. This element contains all the information about a single URL: the URL itself (in the "loc" element), the last modified date ("lastmod"), the page priority ("priority"), and the change frequency of the page ("changefreq").
After reading through the information in the previous two sections, you can get started creating your own sitemap. You can simply write it manually if you don't have a large number of pages to include. If you have a lot of pages, this could be a lot of work, and you can use an automated service instead. If you can write a script in PHP to do this automatically for you, go to the next section.
If you're using PHP for your website, you could make use of a package I've created for this specific use-case. You can find it on Packagist and install it with composer:
composer require roelofjan-elsinga/sitemap-generator
You can incorporate that package in any script you might be using to create a sitemap. After you've added all of your links, you can save the generated XML to a sitemap.xml file that's accessible through the browser.
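If you're not using PHP, the same idea is easy to sketch in other languages. Here's a rough Python illustration (not the package above) that generates the sitemap XML shown earlier; the page data is an example:

```python
import xml.etree.ElementTree as ET

# A sketch of generating a sitemap: each page becomes a <url> element
# with the four children described above.
def build_sitemap(pages):
    urlset = ET.Element(
        "urlset", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
    )
    for page in pages:
        url = ET.SubElement(urlset, "url")
        ET.SubElement(url, "loc").text = page["loc"]
        ET.SubElement(url, "lastmod").text = page["lastmod"]
        ET.SubElement(url, "changefreq").text = page["changefreq"]
        ET.SubElement(url, "priority").text = page["priority"]
    return ET.tostring(urlset, encoding="unicode")

sitemap_xml = build_sitemap([
    {"loc": "https://example.com/", "lastmod": "2019-01-01",
     "changefreq": "monthly", "priority": "1"},
])
print(sitemap_xml)
```

In a real setup you'd loop over your posts or pages to build the list, then write the result to a sitemap.xml file.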
The easiest location to place your generated sitemap is at "yourwebsite.com/sitemap.xml". This is a very predictable place for it, and you want to make it as simple as possible to index all of your URLs. After you've placed the sitemap file in the correct location, verify that you can access it from the browser by going to "yourwebsite.com/path/to/sitemap.xml". If you're seeing your URLs correctly, you're ready for the next step.
Now that you have a sitemap, you're ready for the last step: submitting your sitemap to Google Search Console. Luckily, this step is quite simple. First, make sure you've set up Google Search Console for your website; you can find out how by reading "SEO and personal marketing for developers". When you're in the Search Console, click on "Sitemaps" in the sidebar on the left. Here you can enter the URL of the sitemap. Mine would be "roelofjanelsinga.com/sitemap.xml". My domain is already entered in the form, so all I have to fill out is "sitemap.xml". That's it, Google can now find all of your pages and index them into its search systems.
If you have any questions or any additions to this post, let me know on Twitter! I'm happy to help you or make changes to this post if you caught a mistake or have some better information I can add.
]]>A database is the go-to way to store data for most developers, and for a great reason: it's really great at retaining data. So why did I decide not to use a database for my blog and instead opt for JSON/YAML/Markdown files? Simple! Portability, version control, and performance. Oh, and it's fun to learn something new...
When I did a redesign of my old website, it was still using a database. I hadn't worked on the website for about 2 years. I pulled the code from GitHub and tried to launch it on my local machine. I got it to work, but obviously, I didn't have any data for it to display. I didn't have a local installation of MySQL and didn't see a good reason to install a database engine, download a database, and import it just for 5-6 previous work records and about 30 content blocks that I was going to replace anyway. So I decided to use Markdown for the previous work and get rid of the database altogether.
This meant that no matter where I opened the local version of my blog, I had all my content available without any hurdles. There was no need for an external system, just a Laravel application with a few content files. This means I have a consistent development and production environment and I can set up an identical blog in another place in about 2 minutes without any configuration.
Working with files instead of a CMS with a database allows me to use any file type I want. I chose Markdown files for my content. Only having to care about the importance of titles, texts, and other basic content types is very liberating. With every other CMS I've worked with, I've always felt bound to HTML. If I wanted to add another paragraph, I had to either use a great editor to generate it for me or manually write HTML elements. This got very tedious, slowly stopping me from creating content altogether. That's very sad, because I love creating content, but the means I had to go through to create it just sucked the joy out of it for me. Being able to use Markdown and completely letting go of all that has rejuvenated my pleasure in creating content.
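The post doesn't describe the exact file layout, but a common flat-file pattern is a Markdown body preceded by a simple key: value front-matter header. Here's a hedged Python sketch of splitting such a file (the format and field names are assumptions for illustration, not necessarily what this blog uses):

```python
# A sketch of parsing a flat-file post: a "---"-delimited front-matter
# header of key: value lines, followed by the Markdown body.
def parse_post(text):
    _, header, body = text.split("---\n", 2)
    meta = {}
    for line in header.strip().splitlines():
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip()
    return meta, body.strip()

meta, body = parse_post("""---
title: Why I don't use a database
date: 2019-04-01
---
A database is a go-to way to store data...
""")

print(meta["title"])  # Why I don't use a database
```

With something like this, the blog engine gets a metadata dictionary for listings and feeds, and the Markdown body to render, with no database involved.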
All my content is kept in files, which means you can keep these files in some kind of version control. This is probably one of my favorite "features" of this project. I can see exactly when I've made changes to my posts, as you would in WordPress, but without any database. I have a wide range of options for a Git GUI, or just the command line if that's what I feel like at that moment. I can edit any of my posts on any system that supports Git, and have it available on another system if and when I need it. This might sound like a silly gimmick to you, but I write my posts on 3 devices at any point in time.
Fetching data from a database has been the biggest bottleneck in any of my projects. This could be due to sloppy query design, but often it has to do with the fact that your system is requesting data from an external service. Even if the database is on the same machine, there can be a slight delay between requesting and receiving data. With a remote database, you'll instantly notice a performance drop, because data is fetched over an internet connection. There are simply too many variables for me, especially for a simple blog. The application just needs to read data and display it to the user; adding an external dependency for this seemed like unnecessary complexity.
Having all content on the same storage device as the application makes reading the data near instant. It lets you write the content in whatever way you find the easiest to work with. I chose to write some of the configurations in JSON and some in YAML and I can do this because I have absolute control over the way I decided to save my content. You can make this as simple or as complicated as you want yourself. This way you can very quickly add or change content in a way you're comfortable with.
If I wanted to do the same old thing, I would've used a database. But then I would've missed out on a lot of learning opportunities. By not allowing myself to use a database, I learned to parse YAML files, handle data saved in other file types, and use it however I see fit. I feel like I'm in absolute control of my own content, no matter which device I'm working on, and this is very freeing and makes creating content a true pleasure.
Have you ever worked on a project that didn't use a traditional database to store content? What are your experiences with it? Did you enjoy it or absolutely despise it? Let me know on Twitter!
]]>In January of 2019, I stopped posting my blog posts on Medium and started posting them on my own website. This was primarily because I like to own my own content and be in control of every aspect of it. Moving away from Medium meant that I lost the vast audience of the Medium platform, so I had to capture this attention myself if I wanted my posts to be read. Here's what I've done to accomplish this.
If you want your content to show up in the best way possible, you will have to set up all your meta tags correctly. This means including meta tags for Google, Facebook, Twitter, and other platforms that you may be using or marketing to. You can find the tags I'm using by checking the page source, but for those of you on a mobile phone, here's a snippet of it for my last post:
<meta name="keywords" content="How,I,reduced,my,docker,image,by,55%">
<meta name="description" content="This is where your description goes">
<meta name="author" content="Roelof Jan Elsinga">
<link rel="author" href="https://plus.google.com/u/0/+RoelofJanElsinga"/>
<meta property="og:title" content="How I reduced my docker image by 55% - Roelof Jan Elsinga"/>
<meta property="og:type" content="website"/>
<meta property="og:image" content="https://roelofjanelsinga.com/images/articles/steel_tower.jpeg"/>
<meta property="og:url" content="https://roelofjanelsinga.com/articles/how-i-reduced-my-docker-image-by-55-percent"/>
<meta property="og:description" content="This is where your description goes"/>
<meta name="twitter:card" content="summary_large_image">
<meta name="twitter:url" content="https://roelofjanelsinga.com/articles/how-i-reduced-my-docker-image-by-55-percent">
<meta name="twitter:title" content="How I reduced my docker image by 55% - Roelof Jan Elsinga">
<meta name="twitter:description" content="This is where your description goes">
<meta name="twitter:image" content="https://roelofjanelsinga.com/images/articles/steel_tower.jpeg">
<title>How I reduced my docker image by 55% - Roelof Jan Elsinga</title>
As you can see, there aren't a lot of different types of information you need, it's just a matter of finding the right tag name.
You want to make it as easy as possible for Google to find your blog posts. A great way to do this is to make a sitemap and submit it to the Google Search Console. In the next section, I'll explain how you can do this. An example of a sitemap for your posts can be found on my website; have a look at my sitemap and you'll find that all my blog posts, including this one, have been entered into it.
The sitemap you created in the last section needs to be submitted to Google, so let's get started. First, sign up for Google Analytics and add the verification HTML file they provide to your website. The steps in this process are well explained, so I won't go into them here.
When you've signed up for Google Analytics, you should sign up for Google Search Console. Google Analytics is used to track your page views and different user behaviors, while Google Search Console allows you to submit new pages to the Google index and gives you insights into how people find your website, along with a lot of other useful things for promoting your website. If you're having trouble with this process, this post by Yoast should help: "How to add your website to Google Search Console".
Your readers most likely won't be checking your website every single day to see if there is a new blog post. A lot of other tech blogs I follow actually let you know when there is a new post through an RSS feed. Setting one up allows your readers to be notified when you publish a new post; that's free marketing for you. If you want to see an example of what this looks like (because I did and couldn't find a good one), look at the feed I've set up for my blog. You'll see a lot of XML appear; this is the feed. People will be able to subscribe to this feed through an RSS reader of some sort. When you publish a new blog post, you should update this feed so people get notified. You can add as much or as little information in there as you want.
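To show roughly what such a feed contains, here's a minimal Python sketch that builds a bare-bones RSS 2.0 document. The titles and URLs are invented for illustration, and a real feed would also carry publication dates, descriptions, and other metadata:

```python
import xml.etree.ElementTree as ET

# A minimal RSS 2.0 feed: a <channel> describing the site, plus one
# <item> per post. This is a sketch, not a full implementation.
def build_feed(site_title, site_url, posts):
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = site_title
    ET.SubElement(channel, "link").text = site_url
    for post in posts:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = post["title"]
        ET.SubElement(item, "link").text = post["link"]
    return ET.tostring(rss, encoding="unicode")

feed = build_feed("My blog", "https://example.com", [
    {"title": "Hello world", "link": "https://example.com/hello-world"},
])
print(feed)
```

Regenerating this document whenever you publish a post is all "updating the feed" means; RSS readers poll the URL and pick up the new items.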
As I've noted in the previous section, your readers won't be checking your website every day to see if there is a new post. Even if you have an RSS feed, people may not want to subscribe to it, or are unable to do so for some reason. Another way to notify people that you've posted something new is by sending them an e-mail.
I've done this through MailChimp. If you sign up for my mailing list, you'll be notified (at most once per week) about the posts I've published in the past week. This is all done automatically, because MailChimp can read my RSS feed and generate a newsletter for me. You can do this as well, and here's how:
Follow this article to see what you need to do to set up an automated chain in Mailchimp: "Share Your Blog Posts with Mailchimp". When you get to the stage where you need to create a template, you might get confused about how to actually automatically get the article in your e-mail. Let me show you the template for my own e-mail:
This looks a bit weird, but these are called RSS merge tags. You can find many more if you Google a little bit. I'm posting this here because when I was setting this up, I had no clue what to do. There wasn't a great example out there.
With those merge tags in place let's have Mailchimp generate a preview of the e-mail we'll be sending to our subscribers:
This is the e-mail Mailchimp automatically generated for us. This is my newest blog post (at the time of writing) and it's the only blog post in that week. If there were more published posts for the past week, it'll show all of them in this e-mail. As you can see, the *|RSS:RECENT|* tag has been replaced with links to my recent blog posts. So now I can notify anyone subscribed to my mailing list about any new blog posts, without having to do anything for it.
After all of those automatic solutions, there is still a little bit of manual work to be done. After publishing your posts, you should share them on your social media channels. If you're really not into manual work, there are ways to do this automatically, but I prefer doing it by hand. Of course, you'll need to pick your platform and audience. If you have a lot of friends on Facebook, but none of them are likely to benefit from reading your blog post, perhaps Facebook isn't the right place to share it. For this reason, I only share my posts on Twitter and LinkedIn. This is where I find my target audience (my peers, developers, business people, etc.).
But if you're completely unsure whether people will read your posts on the different social media channels, share them there and see what happens. You have Google Analytics enabled on your website, so you'll be able to see where your visitors are coming from. Perhaps you'll find a new platform that really loves your posts this way!
Do you have any other steps you feel I need to include in this post? Let me know on Twitter! I'm still learning new things about this process every day, so any new insights are appreciated. If you want to be notified when I publish new posts, subscribe to my mailing list or to my RSS feed!
I listen to podcasts almost every single day, so I've compiled a list of my top 10 favorites. I usually listen to them on my way to and from work, and they're an excellent way to learn something by just listening to some knowledgeable people speak.
As you can see, there's a clear pattern in these podcasts: programming and business. There are only two exceptions: one about an interest of mine (space and science) and one about relationships.
These podcasts keep me up to date with topics in the development community and help me shape a business around a product. I listen to the business podcasts because I'd love to start my own company at some point, and specifically to the ones about building your own business, because I would not want to take on investors and answer to others about MY business.
The programming podcasts from this list are the following:
These go into developer experiences, new programming techniques, how to test, and how to deal with certain problems. They really help to explain some topics or solve some of the problems I have on a day-to-day basis.
The business podcasts that I listen to are all about the business itself, starting a business, and running a business efficiently. The ones from my top 10 about business are:
I'm interested in starting a business at some point, and these podcasts highlight do's and don'ts for doing so. The overall theme is to be patient and to market the business early on. One of my goals is to never have any outside investors, because I don't want to answer to anyone but myself about my own business. Investors give you a nice boost, but if your business is sound and you can make the money yourself, through clients, that's a much better option, because then all you have to do is serve your clients. If you take outside investments, you have to serve your investors as well, and that won't always benefit the people who are actually paying for your service.
So if you're interested in business or building a business, definitely give these podcasts a try.
Star Talk Radio is my go-to podcast for anything science-related. They cover a range of topics within the science community. Most episodes are about space or space travel, but there is definitely a good number of episodes on other topics.
Science has been interesting to me for a long time. Unfortunately, that interest only started after I could no longer take any science classes in school, so I now learn about new developments through my own research and by reading books. Being able to figure out what a quantum particle is, is pretty exciting.
In "Couples Therapy with Candice and Casey", you obviously hear about Candice and Casey's relationship, but it's more than that. When listening, I think of ways I could improve my own relationship or different ways to communicate certain things. It's interesting to see another couple go through certain processes and to learn from their mistakes.
Do you have any good podcasts you listen to occasionally? Do you listen to any of these podcasts? If so, what do you think of them?
A smaller Docker image has all kinds of benefits. For one, it'll download more quickly when you're deploying your application to a new location or rolling an updated image out to existing applications. Being able to update images quickly is very important. Besides that, keeping images clean and free of bloat is important for properly working, responsive containers. Read on to find out how I reduced my Docker image from 1.04 GB to 555 MB.
With that said, I started out with a very bloated Ubuntu 18.04 base image for my main Docker image. This image contained a lot of debugging packages, and packages I just plain wasn't using anymore. As a result, the built image was 1.04 GB, which is quite large, especially for a single component in a network of services. I also noticed a lot of processes that were either slowing down over time or were slower than I expected them to be.
So in my search for ways to improve the performance, I found three simple solutions I could apply right away, and together they reduced the image size by 55%. These were:
Use a smaller base image than a full ubuntu:18.04 image. Since Ubuntu is largely based on Debian, the logical choice was to use debian:9.7. This change alone brought the image size down to 860 MB. That was already a huge reduction, but I wasn't satisfied yet. After changing this to debian:9.7-slim, the image was 600 MB, another huge reduction.
The second solution was to clean out all temporary files after every apt-get install command. This reduced the size of the image by about 20 MB, bringing it to 580 MB. To take advantage of this, append the commands below to every apt-get install command to get rid of all temporary files:
apt-get clean &&
rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
Your operating system loves to make installing packages as simple as possible, so by default apt-get also installs all recommended packages, even inside a Docker image. You can disable this, and you really should. By adding the --no-install-recommends flag to your apt-get install commands, it'll only install the bare minimum needed to run. This means that you may have to install a few packages manually, but you get rid of a lot of bloat. This brought my image size down to its final 555 MB.
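Putting the three changes together, a Dockerfile applying them could look something like the sketch below. The package names and start command are placeholders for illustration, not the ones from my actual image:

```dockerfile
# 1. Slim Debian base instead of a full ubuntu:18.04 image
FROM debian:9.7-slim

# 2. Skip recommended packages, and 3. clean up apt's leftovers in the
#    same RUN step, so the temporary files never end up baked into a layer
RUN apt-get update && \
    apt-get install -y --no-install-recommends \
        ca-certificates \
        curl && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

WORKDIR /app
COPY . /app
CMD ["./start.sh"]
```

Note that the cleanup has to happen in the same RUN instruction as the install; if you clean up in a later instruction, the files are already part of an earlier layer and the image doesn't shrink.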
Do you have any more tips on reducing docker images further? Make sure to contact me on Twitter! I can always use advice on these matters, as I'm still learning new things every single day.
As it turns out, when using a Polygon or MultiPolygon for searching on a SpatialField with IsWithin(), you can't use a square shape, unless you define it in a counter-clockwise manner, which didn't work for me. According to the WKT standards, such a square is not a valid shape, so to solve this problem, simply add two extra points in the middle of the top and bottom edges.
My initial solution was a self-closing shape that only had its four corners defined, but this either returned errors or gave me no results. This means that
MULTIPOLYGON(
(
(
179 85.05112877980659,
179 -85.05112877980659,
-179 -85.05112877980659,
-179 85.05112877980659,
179 85.05112877980659
)
)
)
which is a self-closing square, gives an error. When using values like 175 and -175 (which aren't good enough for my case), you don't get an error, but I simply didn't get any search results either.
But (notice the two extra points: 0 -85.05112877980659 and 0 85.05112877980659)
MULTIPOLYGON(
(
(
179 85.05112877980659,
179 -85.05112877980659,
0 -85.05112877980659,
-179 -85.05112877980659,
-179 85.05112877980659,
0 85.05112877980659,
179 85.05112877980659
)
)
)
is completely valid and will get you the results you want.
The reason I'm not using -180 to 180 and -90 to 90 is that the values I used are the maximum values Google uses for its maps. I use Google Maps as an input for saving Polygons and MultiPolygons, so there is no point in going past those maximums.
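That oddly specific latitude isn't arbitrary: it's the cutoff of the Web Mercator projection that Google Maps uses, which clips the map at arctan(sinh(π)) so the world fits in a square. You can verify the constant yourself:

```python
import math

# Web Mercator clips latitude at arctan(sinh(pi)), which is where the
# square world map's top and bottom edges fall.
max_lat = math.degrees(math.atan(math.sinh(math.pi)))
print(max_lat)  # approximately 85.05112877980659
```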
I wasted three hours on this, so you don't have to! Let me know on Twitter if you've ever been stuck on a bug like this that seems easy, but you end up spending hours on it anyway!
Programming languages are evolving lightning fast, businesses are ever more demanding, and employees are being pushed to the edge. Today's businesses increasingly push people toward burnout, and it's tragic how people seem to accept this as normal. Employers expect their employees to slave away to make (unrealistic) deadlines, instead of scaling down scope to make the deadline realistic. SPOILER ALERT: there is a positive message in this post, keep reading. Also, the advice is at the bottom.
I've been on the brink of burnout three times in 2018. Three times. After the third time, I stopped accepting that nothing was being done to prevent it from happening, so I took matters into my own hands and learned to say no. "Can you do this for me right now?" "No, I'm working on something else right now. I'll get to your task after I've finished mine." This helped, but it also caused irritation and isn't sustainable in the long run. Sometimes tasks just have to be done "right now".
To be able to keep up with this pace, I had to find hobbies that had nothing to do with computers or sitting still in the same place for a long period of time. I started to do things outside, just about anything, and this worked really well. But it obviously didn't solve the root of the problem, which was that work was draining and unpleasant. That's why I've tried to work with other departments to make it better for everyone.
To avoid irritation between the departments, I've tried to make clear that every single time we're interrupted with a question, it doesn't just cost us the time to listen and answer. It takes an additional 5–20 minutes, depending on how challenging the task we're working on is, to get back into it. To put this in perspective: we have three developers in one room, so if one of them gets asked a question out loud, 3 × 5–20 minutes gets wasted. The solution (for now) is to send the question through Slack; this will still disrupt one person's concentration, but at least not all three.
To come up with ways to solve this problem, we had a team meeting where we asked tough questions and expected tough answers. For example: what didn't go so well this week, and what would need to happen to make it better next week? Putting all frustrations on the table has, ironically enough, made the team tighter and work better together.
We concluded that we all felt very similar about our current work situation and that we should put in an effort to get more work done while being less stressed and keeping a good working relationship with our colleagues. The main question: how can we do this together?
Make communication asynchronous, have quiet periods in the office, and make it clear that interruptions are unacceptable. Those are just some of the solutions we've worked out.
One of the things that distracted us, and often did more harm than good, was the constant synchronous communication between everyone. Sending files and finding them back later was impossible.
Since we've moved away from Slack, we've been able to work much more efficiently. Nobody expects an answer right away anymore; instead, everyone just waits until the other person has some free time to check the messages and formulate a thoughtful reply. The fact that you can upload files in a specific spot, instead of into a chronological chat, helps to avoid irritation. "I sent you that last week" doesn't really happen anymore. Internal communication has become much more pleasant.
To minimize interruptions even more, we've set aside a few hours per day when it's quiet: no talking, no interruptions, just quietness. We've implemented library rules (quiet times) in the morning hours and the late afternoon, so it's quiet when people get to work and when they leave to go home. These were always huge moments of interruption because there was a lot going on, but now people can get things done. This really helps with focusing on some of the larger tasks, while still giving people a chance to talk during the middle of the workday.
We've also made it clear to everyone that interruptions are unacceptable. Everyone's time is valuable, and you have no right to decide that your time is more important than someone else's. If you put it in this perspective, people will think twice about interrupting you. So far, it's helped a lot: people just send messages and e-mails instead of coming to your desk, and this is great.
There is no golden rule, but there are definitely things you can do to make it less severe.
If you've ever had burnout, what have you done to make it go away? I'd like to hear from you! Contact me on Twitter and share your story.
I'd like to give a huge shout-out to my coworkers for embracing the changes I've implemented. It makes a workday much more productive and pleasant. Since I started writing this post, and with all the changes in place, I haven't felt anything but productive at work.
I want to build a company, never take any outside investments, and build a livelihood for any employees I'll have (at some point). Work should be work, and free time should be free time. When you're working, you should be able to work uninterrupted and have a great working day; when you're done, you have free time. During this free time, you shouldn't work, think about work, or be contacted about work. Here's how I would manage this:
40 hours of work per week is plenty of time to get a great amount of work done, but most people get interrupted too much to actually work that amount of time. I think most people really only work for about 10 hours per week and never get close to 40. It's not just in the interest of employees to work uninterrupted, either. Think about it: as an owner, do you really want to pay for 40 hours if you only get 10 hours of work? I didn't think so.
The company culture will promote personal productivity, while at the same time making sure that you're not working (or even thinking about work) during your off-time, weekends, and holidays. This means that every individual gets to choose how they want to work, where they want to work, and at what time. It also doesn't matter how short or long you spend on a task, as long as it's done by the deadline.
There will be no long projects, because long projects drain anyone's motivation. The longest project will take 2 weeks. This seems very short, but any feature or project has a bare minimum. If it turns out that you need 4 weeks to complete the "full version" of the feature, start stripping the "nice-to-have" aspects and only build the bare minimum. Through iteration, you can always add the nice-to-haves at a later stage. The bare minimum is no excuse for a non-working feature; it challenges you to prioritize your work and skip all the bloat.
When working on tasks during a workday, you can often get distracted by other tasks that need to be completed while you're still working on something else. As this goes on for a while, you'll end up with 10 tasks, of which you've completed none. How will you feel at the end of the day? Like you don't know why you were at work, like you haven't done anything all day.
This is why you need to say NO more often. But saying NO alone won't solve the problem; you need to defend your work time, one task at a time. Only take on a new task when the previous one is completed. When the second task is "urgent" and "very important", take a step back and ask questions. Make sure the new task really is THAT important that it needs to be done "right now". Prioritize based on the task, not on who gave you the task.
By questioning the task itself, and not worrying about who gave it to you, you avoid the "but the boss told me to do this" trap. Managers and leaders hand out tasks, but don't always know what a task actually involves. Don't just accept those tasks; ask questions about them. The goal is to prioritize. Maybe the task you're currently working on already takes care of the new task; maybe your current task has much more impact. Don't simply assume that because a manager says a task is important, it actually is. Don't ignore the task either, and be prepared to explain your reasoning. Remember, they're still in charge of you.
Saying NO doesn't just apply to day-to-day tasks. As a company, you have to learn to say no to some customer wishes. Saying YES to everything can get you into trouble: you can take on too much work without enough employees to complete it, or a YES can compromise your integrity, your ethics, and your office politics. Sometimes your company is simply not suited for a specific customer need, and they'd be better off taking their business to another company. In these situations, saying NO is the right choice.
And as a last piece of advice: it's better to say you won't be able to do something and then do it anyway, than to say you'll do something and never get to it. If you say NO and do it anyway, you're seen as awesome for doing it even though you didn't really have time for it. That's what you want, right? You don't want to hear "You said you'd do this for me and now you still haven't done it". Saying NO to the majority of requests helps you control the expectations put on you. If you say YES all the time, you'll have too many people depending on you at once and you'll most likely let most of them down. Try to avoid this at all costs: just say no.
Disclaimer: I just like to learn new languages, I don't actually have any degrees in them.
Learning a new language is very exciting. I like to do it because it puts my own language to the test: I compare the grammar and the words when and where I can, which helps me learn to use the language in writing and speech. The language I'm currently learning is Norwegian, so let's focus on that one in this post.
I speak Dutch natively, English fluently, and a bit of German here and there. These three languages have words in common every once in a while, which helped while learning them. About two years ago I decided it'd be fun to learn a third language fluently (not counting German here, because I'm far from fluent). At the time I was fascinated with Vikings, both the sagas and the television show. I wanted to be able to understand them, and I found out the language they spoke was Old Norse. The closest living thing to Old Norse is Icelandic, but since the resources for learning Icelandic were very limited, I decided to go for Norwegian. Its grammar is somewhat close to Icelandic and it has a few similar-sounding words, but it also has a lot in common with Swedish and Danish.
After I started learning Norwegian, I found out that there are actually two written Norwegian languages: Nynorsk and Bokmål. The two sound similar but are written very differently: Nynorsk is written as it sounds, while Bokmål is more of an average of the dialects of Eastern Norway. Anyway, I was learning Bokmål, which was further from Icelandic than I would've liked, but it did help me understand Swedish and Danish a bit better.
While learning Bokmål, I've made heavy use of my knowledge of Dutch and English to figure out what certain words mean before the course tells me. As an example, "bus driver" in Dutch is "buschauffeur", which looks like it's French, and it partially is: "chauffeur" is a French loanword. The Bokmål word is "bussjåfør", which looks very intimidating but sounds almost identical to the Dutch word. Because some words sound so similar, I can figure out their meaning very quickly.
A very tricky grammatical thing I found in all the Scandinavian languages is that there is no separate word for "the", as in "I like the car". The Scandinavian languages solve this by adding a suffix to "car": "Jeg liker bilen". The word for "car" is "bil". The suffix can look a bit different from time to time, which still confuses me every once in a while: "Vi sitter ved bordet", "We are sitting by the table". This concept was very difficult to get used to, but now I appreciate it, because you can say a lot with very few words.
I originally wrote this post in July 2018; it's now February 2019 and I'm still learning Norwegian. It's still a lot of fun, and I've started to watch videos, news clips, and other Norwegian media. People speak very quickly, but I can understand a lot of the conversations going on in those videos, and it's very exciting!
I'm also watching some Icelandic, Swedish, and Danish videos to see if I can understand any of it. To my surprise, I can actually pick out a few Icelandic words, even though it sounds very different from Norwegian. Swedish sounds fairly similar to Norwegian; it's a bit like Flemish is to Dutch, or Austrian German to German. This means I can understand basic conversations, but I get thrown off by some of the words that are different and don't sound similar.
Danish is a whole different story, however. Danish is very easy to read, because I can combine Norwegian, Dutch, English, and German to decipher it, but when people are speaking I'm completely lost. Danish speech sounds very different from Norwegian. While learning Norwegian I got used to crisp, fully pronounced words, whereas Danish sometimes combines letters into a single sound and cuts off half of the word. So subtitles make perfect sense, but the spoken conversation doesn't seem to line up with them.
Have you tried to learn a new language, or do you want to? Are you bilingual, or maybe even multilingual? How did you learn to use these languages? Let me know what your experiences are on Twitter! I'd love to hear from you!
For the past 4 or so years, I've been working on a product with non-technical people, for non-technical people: PunchlistHero. Here are the 5 lessons I've learned from this.
To get started with any work, you need to know what to do. In order to find out what problem the person is actually facing, you have to ask questions, a lot of them. The goal here is to find out what the actual problem is, not what the person thinks is wrong with the current situation.
For example, while working on my own product I asked questions like "What would be the simplest way for you to save tasks?" only to find out that the actual problem was that at the time, this person had to write these tasks on a piece of paper, then go to the office and insert them into a management system. You'd think he was now done with the process, but you'd be wrong. He then had to email this entire list to all the other people who had to complete these tasks.
So by asking a very general question, I got very distorted answers, because that person simply didn't know any better than to write things down multiple times. Only by asking more and more questions, like "How do you do your job right now? Walk me through your process.", did I figure out what the actual problem was. I saw several problems here: you have to enter tasks multiple times, you have to copy and paste the tasks into an e-mail, there is no personalized task list for assignees, and the whole process just takes far too long.
While people are answering your questions, you need to be quiet and listen. This is not simply to be able to hear what they're saying. When people start talking, they will often reveal more information than you asked for, but they will also give you information that you may not even have thought about asking.
When you listen, keep notes. You can use these notes to ask follow-up questions. If you think that simply recording a conversation is enough, you're sadly mistaken. A recording is great if you want to preserve everything that's being said, but you can't use it for follow-up questions. The conversation should have a natural flow. When the people you're speaking with feel at ease, you will get all the answers you need for your product, and hopefully more.
Developers have the horrible tendency to jump the gun and come up with features because the user asked for them or because they seem to solve the problem at first glance. However, the people you speak with don't ask for features; they ask for solutions to problems and simply assume that a specific feature will solve theirs. Sometimes it will do the job, but don't just assume that it does. You have to do a bit of research and come up with ways to solve the problem. Sometimes the first answer is wrong and you have to keep digging for better solutions.
An example of a problem I've dealt with: a person used voice input instead of typing, which caused issues because people would be assigned the wrong tasks. A simple solution would be a dropdown with all the available people. That would be fine with 10 people, but in this case, there were hundreds. An auto-complete element would be fine as well, but that takes up extra space and you'd still need to use your finger to select the right person. The actual problem was that the person was walking through houses and simply didn't have time to write down a task and then assign it to someone.
What I came up with was a combination of things. First of all, I added the auto-complete field. That way, if you do want to select the person through touch or click, you can. The second layer was a bit more involved: a server-side solution using Elasticsearch. When the assignee was received on the server, it would check whether that specific assignee already exists, with an exact match. If not, it would try to match it through a fuzzy search in Elasticsearch with a minimum relevancy score of 90%, meaning it was a 90% match or better. If that still doesn't produce an assignee, it simply creates a new one. This alone solved 90% of the incorrect assignees. The remaining 10% could be solved through an extensive merging process, where you can assign all tasks to another person and delete the original assignee in one go.
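As a sketch of that server-side flow (the field name, the score normalization, and the data-access helpers here are illustrative assumptions, not my actual code), the resolution logic looks roughly like this:

```python
def build_assignee_search(name):
    """Elasticsearch query body for a fuzzy match on an assignee's name.
    The "name" field and the fuzziness setting are assumptions for this sketch."""
    return {"query": {"match": {"name": {"query": name, "fuzziness": "AUTO"}}}}


def resolve_assignee(name, exact_lookup, fuzzy_search):
    """Resolve a possibly misheard assignee name in three steps:
    1. exact match -> reuse the existing assignee
    2. fuzzy match with a normalized score of 0.9 or higher -> reuse the best hit
    3. otherwise -> return None so the caller creates a brand-new assignee

    `exact_lookup` and `fuzzy_search` stand in for the real data layer."""
    existing = exact_lookup(name)
    if existing is not None:
        return existing
    hits = fuzzy_search(build_assignee_search(name))
    if hits and hits[0]["normalized_score"] >= 0.9:
        return hits[0]["name"]
    return None
```

With a fake data layer, resolving the misheard "Jon" against a stored "John" that scores 0.95 returns the existing "John" instead of creating a duplicate.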
Sometimes you think you have the best solution to the problem, one you've used for another project before, where it worked like a charm. But it may not be the best solution to this problem, or people simply don't understand why you came up with a solution like that. This is when you compromise: you combine their ideas with your ideas into something you know will work and they would actually want to use. Over time this can always be altered into something that leans more toward their solution or yours. By compromising, all parties feel involved in the final result. This makes them take a bit of ownership of the solution and present it to others as a good idea.
Technical people love new features and new designs. They can explore an application all over again and see what's changed. Non-technical people, in most cases, don't like this at all. They just want to do their tasks as quickly as they can. When they're presented with a new design, their workflow is interrupted and they won't be happy about it. Does this mean you can never redesign your application? No, of course not! You just have to do it very carefully, incrementally, and above all, slowly.
The point is that they don't have to "re-learn" your whole application, but only small parts at a time.
You want to make their experience better, not terrible. When you change features very slowly, you will make their experience better over time and you still get to redesign your application.
If you really "need" to redesign your application, consider versioning everything. By this, I mean you start to support multiple environments: multiple versions of the application. This seems like a lot of work, but it doesn't have to be. You can simply let users know that you'll maintain the current application and fix any bugs that arise, but won't add any new features. If those users really want the new features, they'll have to consider upgrading to the new environment. This is how I currently deal with a redesign for PunchlistHero: the old version is just a separate branch in the Git repository, so updates can be done quickly and easily.
What have you learned from your experiences?
Do you have any other tips or have you experienced working with non-technical people differently? Let me know! I'd love to get in touch on Twitter and get your take on this topic!
I've found that having plants in the spaces where I spend a lot of time, like the office, relaxes me. Workdesign published a study that supports this. Among its findings: having plants in a workplace reduces concentration problems by 23% and fatigue by 30%. It also helps to reduce coughs, sore throats, and eye irritation by at least 24%. In short, it is very beneficial to the well-being of employees. Extending this to my living space, it puts my mind at ease and helps me relax. Seeing the green leaves, fun patterns, and just something that's alive and growing in front of my eyes is very satisfying to me.
I have two areas where I keep my plants: a sunny, south-facing window, and a shady room without any windows to the outside. The shady room only has internal windows and gets its light from other rooms. It's a dark room most of the day, but to light it up, I use LED strips.
The sunny room has all my succulents, cacti, and tropical plants. These plants all need a lot of light. Some of them need a lot of humidity, while others like to be dry. I keep them all in the same space, but give each of them different care: the plants that like humidity get misted with water every day to keep their leaves damp, while the plants that like to be dry get water maybe once a week, some even once every two weeks.
Some of the plants in this room need bright, but indirect sunlight. So one corner of the room has partial shading because of curtains.
This is my parlour palm in the sunny room.
The shady room is home to my low-light plants. Right now, there are several spider plants, a low-light-tolerant ball cactus, and a snake plant. These plants don't like to be in the sun at all, because it'll burn their leaves, but they can tolerate low light. The spider plants need to be watered fairly frequently and can't dry out; if they do, their leaves will turn brown and fall off. The snake plant and the cactus, on the other hand, need to dry out completely between waterings. If you keep them too wet, their roots will rot and the plant will die. That makes them amazing for people who forget to water their plants.
As you can see in the picture above, there are two glass jars with water and propagated spider plants. I'm growing a few small cuttings in water; this way I can watch the plants grow roots until they're ready for some soil. This is definitely not a requirement for propagating spider plants, but I like being able to see the growing roots.
This is my shady office, I use the LED strips to provide the plants with some additional light.
I've recently gotten a humidifier to create a more humid environment for some of my plants. Humidity isn't a huge problem in the summer, but winters with the radiators on make the air very dry. This can cause problems for plants that like to be in moist soil at all times, because they'll dry out too quickly. To combat this dry air, the humidifier raises the humidity and gives these plants a more pleasant environment. Of course, I don't have enough humidifiers for all of my plants, so I also mist some of them with water.
When applying for development jobs, you're often asked to do a coding test to prove that you know what you're doing. I think this is terrible, and here are better ways to figure out if someone is a good fit for the job, the team, and the company:
Let the applicant work together with your developers in a team on real projects, just as if they were already hired. Coding is only 5% of the job; communication skills, teamwork, and culture fit are so much more important. A person can learn how to code, but it's much harder to learn how to be a team player and a perfect fit for your company. Hire based on team fit, not just coding skills.
Has the applicant worked on any personal projects? Perfect! Use those to judge their programming skills. It's much better to look at code that a person enjoyed writing than code that was forced into a limited timeframe. Look at how they comment their code, and whether they take care of something as simple as a consistent coding style and formatting. A passionate and organized developer is what you want; don't judge them by the forced results of a timed coding test.
During development sprints, you plan work that needs to be completed by the end of the sprint, and you try to leave some extra room for bugs that might occur and need to be fixed. When bugs occur, you're expected to drop everything you're working on to solve them, but this is a huge strain on developers, and there are better ways to deal with it. This post describes how I plan to use time more efficiently and keep developers focused on the task at hand during sprints.
A specified time frame to complete tasks is an excellent way to make sure tasks are completed on time. The time pressure will promote focus rather than distraction during a workday. By time pressure I mean that there is a deadline, not an excuse to cram too much work into a given time frame. As a team, you will decide on a duration for the upcoming sprint. This is not a fixed number of weeks, because sometimes there are events or other things that will interrupt a full 6-week sprint, and interrupted sprints are exactly what we're trying to avoid here.
The minimum duration for a sprint is 2 weeks and the maximum is 6 weeks. If you were to go longer than 6 weeks, productivity would go down, because "there is plenty of time to do this later, I'm making something cool right now". The deadline forces you to make choices about what to build and which features have the highest priority. This means that you can decide to take on a large and ambitious project for 6 weeks, but at the end of those 6 weeks it has to be done. This could mean deciding to build only the absolute minimum viable product (MVP) and expand on it later. The results don't always have to be perfect. This is not an excuse to deploy sloppy work; it's to scope down to the MVP and build that as well as possible.
Every once in a while you'll think of a new feature to build during a sprint. You might be tempted to work on it during the same sprint, after all other work has been completed, but this should be discouraged. This is not meant to be mean or to kill creativity, but to protect the overall product. Any new features will be added to the backlog and considered for the next sprint. This way, you're not wasting your time on features that may not be as useful as you first thought. The "cool-down" time you give the feature, by simply adding it to the backlog, helps to prioritize it. If you, and others, still think it's a great feature after 2-6 weeks, you can go ahead and build it. If not, simply remove it from the backlog and no harm was done.
With that said, anyone is allowed to add features to the backlog. But only the development team and product owner decide which tasks will actually be added to an upcoming sprint. The development team knows best how long something will take to build, and this is why they have the last say on what's being built. The product owner will simply help to prioritize work during this process.
To help your team prioritize features and tasks, it could be beneficial to use "Jobs to be Done" in your workflow. A quick explanation can be found on YouTube. This will help you filter new features or changes to see if they actually benefit the users of your product. All tasks that add benefit to your product should get the highest priority, because they actually make the product more useful. Any other tasks get a lower priority, or none at all.
After a few weeks of working on features and tests, there could still be several bugs. To minimize interruptions caused by these bugs in the next sprint, take one or two weeks between sprints to hunt for bugs and resolve them. Any minor details from the past sprint can be altered during this bug sprint, but only after you can't find any more bugs. It's not allowed to start new features during this bug sprint; any new features will need to be added to the backlog and can be picked up in the next sprint.
Of course, hunting for bugs only really applies to a team that has no dedicated testers or QA team. When these people are present, they will collect all issues during the sprint and list them to be fixed during the bug sprints. A note about critical bugs: they always have priority over every other task. Assign a person within the development team to make decisions on which bug is critical and which bug has a lower priority. It doesn't matter which member of the development team takes this job, but it's important that only one person does it.
There should only be minimal room for discussion because you can discuss something endlessly without making a decision. Ideally, the most experienced member of the team would take this task upon him- or herself, because he or she usually has better judgment on the subject. However, it could also be a great idea to let everyone try to take on this role, but only one person per sprint. This way, other departments will know who to talk to when something goes wrong, instead of bothering multiple people with their problems and creating irritation within the team.
The bug lead will determine if the bug is critical enough to make someone drop everything they're doing to resolve the problem.
To achieve a system that is always evolving and improving, continuous integration and continuous deployment (CI/CD) is a method to implement in your team's workflow. It simply means that new features and bug fixes are published as soon as they're done and tested. This way you can provide the users of your product with quick solutions and new features.
Once you've completed work and fully tested your implementation, you can make a pull request (or merge request) to the appropriate branch your team has decided on. You can use any CI/CD service to help test the code in your pull request. Assign someone from your team to look over your code. Remember, you're now preventing them from working on their own tasks, so keep these pull requests short and to the point. Once they approve your changes, you can wait for someone to merge your pull request, or do it yourself if you have the ability.
To keep these releases simple to track, you'll have to use versioning and refer to these versions when talking about problems and solutions. For an excellent reference about versioning, have a look at Semantic Versioning. In short, given a version number MAJOR.MINOR.PATCH (0.0.0, 1.0.0, etc.), increment the:
MAJOR version when you make incompatible API changes,
MINOR version when you add functionality in a backwards-compatible manner, and
PATCH version when you make backwards-compatible bug fixes.
(Copied from https://semver.org/)
These versions can get tagged through Git and then pushed to any repository service.
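If you ever need to compare two such version strings in code, PHP's built-in version_compare function understands the MAJOR.MINOR.PATCH format:

```php
<?php
// version_compare splits on the dots and compares each numeric part in
// turn, so 1.10.0 correctly sorts after 1.4.2.
var_dump(version_compare('1.4.2', '1.10.0', '<'));  // bool(true)
var_dump(version_compare('2.0.0', '1.99.99', '>')); // bool(true)
```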
This is a lot of information for a single post, but I think it will benefit the workflow within a development team. The goal is to reduce distractions, get more work done, and have a good feeling about what you've worked on when you go home in the afternoon.
Thank you for reading this post, and if you have anything that you want to see explained in a better way, let me know and I'll do my best to facilitate this change!
For the past four years I've been working on a side project called PunchlistHero (the new "stable" version will be released in two weeks). I still find it very interesting, and it helps me learn new things about marketing and programming. But how do I find the time to really work on it?
It's all about being able to get started on it quickly. For the past few weeks I had a few allocated slots of 3 hours that I wanted to use to work on my side project. But setting up took 45 minutes, then I messed around with some new code I didn't really understand, and after everything, I wrote about 10 lines of CSS. So it took me 3 hours to write 10 lines of CSS. This was very frustrating, and I was annoyed with myself for not having done anything meaningful in all that time.
Motivation to get anything more than small tasks done is difficult to come by.
I decided it was time for a change in my work habits. I set up a very basic Docker environment and a few commands to start the project and run tasks in the development environment. I could now start writing code within 5 minutes. Problem 1 was solved. But we're not there yet: motivation to get anything more than small tasks done was still difficult to come by.
I solved this by making to-do lists and constantly refactoring them, making the tasks smaller and smaller. So a bigger task like "build a pricing module" would turn into: "create a payment plan in Stripe", "create a payment form", "create a payment success page", "create a payment failed page", and lastly "create a subscription status page". The pricing-module task is very large and vague, and you have no clue where to start or what to do to complete it. But by refactoring, I slowly gave this task meaning and a direction. I pointed out which parts were needed to complete the full task: a way to capture payments, a way to ask users for payment information, and status pages to tell users what is going on with their payment.
Finding new tasks to work on becomes more difficult as your project becomes larger and/or more stable. Every once in a while I'll think of something to add to the system, but often it gets shut down by the users, because to understand what the users really want, you need to ask them. Finding out what they really want and need from your project is a huge motivation boost to build THAT feature that adds value for your users.
To understand what the users really want, you need to ask the users what they really want.
Every time I'm stuck, or not sure how to continue, I have a Skype conversation with somebody who works with my project every single day and ask what can be improved. This often leads to interesting new ways of doing things.
For example: in my project, PunchlistHero, there is a process which allows you to add a picture to an issue you're assigning to a specific trade. This picture helps describe a problem a contractor finds in a home during the inspection. An issue with this process was that you couldn't take pictures one after another, because the file input would get cleared and refilled with the new file. It's a multiple-file input, so if you had already taken the pictures, you could easily add them all at the same time, but adding pictures one by one didn't work. The Skype conversation brought this problem to light and helped me come up with a solution that will instantly improve the usability of the system for this user.
This is how I come up with new features or improvements, to make the project more useful for its users.
So there are three ways that I make working on my side project more enjoyable: being able to start working on it quickly, so I can add things whether I have a lot of time or very little; finding more motivation by making the tasks smaller and easier to do when there is little time; and getting feedback from actual users about how they're using the project and what they'd like to see improved or changed. It's inspiring to see others use something you've made to make their own lives better.
It's finally time to make this happen! I've been running and maintaining a website for about 2.5 years now. This website is built with AngularJS (v1.6.9). It works reasonably well, but nothing compared to the newer versions of Angular. So I finally took the first steps to migrate everything to a newer version, incrementally. Here's how I did it:
Because nobody wants to reinvent the wheel, follow this "official" guide to create a new project: https://angular.io/guide/quickstart
This may seem fairly easy, and it was... in the beginning, but there is more to it than just changing a file extension. To rename all files from ".js" to ".ts", you can use whatever method you'd like: manually, with an NPM package, or through your terminal. I chose to use an NPM package: Renamer. If you want to use the same, run the following commands:
npm i -g renamer
and to actually rename the files:
renamer --find '.js' --replace '.ts' 'root/folder/of/app/**/*.js'
This renames all your JavaScript files to TypeScript files. Next up, if you don't already work with ES6/ES2015, you'll want to convert your JavaScript to this format; in my experience, the TypeScript setup works much more smoothly with arrow functions than with old-style functions. Also, you'll want to start using JavaScript's "import" and "export" statements instead of "require". This will help Webpack (built in with Angular CLI) build your application later on.
Once you've renamed and rewritten your code, you can copy it into the Angular CLI project you created earlier. Follow the guide from "Create an import chain" until you reach "Configure Angular CLI": Making the hybrid. At this point, you should have all your files in TypeScript format and integrated into your "new" Angular CLI project.
At this point, you could already start to compile your app, but you'll run into errors if you've been using absolute template URLs like I was. Angular CLI uses Webpack to compile its TypeScript files into JavaScript and then into a bundle. Webpack requires you to use relative paths. So now replace all your absolute template paths with relative ones. These could be located in directives (or components), your router, or any controllers.
At this point, you will be able to fully compile your hybrid app, but only for development purposes. Once you try to compile your app with production flags:
ng build --prod
You will not be able to load the app in your browser. This is because Webpack will try to resolve any and all functions to compile them into basic JavaScript. This works for Angular (v6), but not for AngularJS. To fix this, edit the following settings in your "angular.json" file:
/*This is the old situation*/
"configurations": {
"production": {
"fileReplacements": [
{
"replace": "src/environments/environment.ts",
"with": "src/environments/environment.prod.ts"
}
],
"optimization": true,
"outputHashing": "all",
"sourceMap": false,
"extractCss": true,
"namedChunks": false,
"aot": true,
"extractLicenses": true,
"vendorChunk": false,
"buildOptimizer": true,
"serviceWorker": true
}
}
/*And this is the new situation*/
"configurations": {
"production": {
"fileReplacements": [
{
"replace": "src/environments/environment.ts",
"with": "src/environments/environment.prod.ts"
}
],
"optimization": true,
"outputHashing": "all",
"sourceMap": false,
"extractCss": true,
"namedChunks": false,
"aot": false, //Updated, remove this comment if you copy/paste
"extractLicenses": true,
"vendorChunk": false,
"buildOptimizer": false, //Updated, remove if copy/paste
"serviceWorker": true
}
}
You're almost done! The last step is to include your AngularJS app in your new, shiny Angular app. You can do this by following "Bootstrap the hybrid" from the guide I pointed you to earlier: Making the hybrid. If you want to be able to use new Angular components in your AngularJS app, follow these steps: Angular upgrade. That guide will also show you how you can use AngularJS components in Angular, but I would recommend upgrading as many of these components to Angular (v6) as you go. They'll have to be upgraded at some point anyway, so this is the perfect opportunity!
Now you can finally build your app for production! Once you've completely converted everything to Angular (v6), you will be able to use AOT and the build optimizer again, making your app even more efficient. It could be that I made a mistake in my own process and that's why AOT is currently not working, but that will have to wait for a revision.
This guide will not work for everyone; I've personally used 3 or 4 different guides and even more Google searches to get to the right place. This upgrade is not the easiest thing you'll ever do, but it will be very worth it. It will greatly improve the stability and reliability of your app. It will also solve any SEO problems you may have had with AngularJS, because Angular is actually able to render on a (Node) server!
If you have any questions, or better, suggestions on how I can make this process easier for you and me, please leave a comment. I'd love to help you out or learn from your experiences undertaking this hellish upgrade! If you'd like to read more about my struggles with Angular and SEO, have a look at: How to index a single page application built in AngularJS.
In the last blog, I left you with some first testing results for a product page. If you haven't read it, you can do so by reading "Modernizing log: Part 2, GraphQL test results". In that post, I described how I had grouped all static resources under two resource calls, instead of nine. Well, there are exciting updates that I will share with you now!
First, let me refresh your memory about what results I've had so far. The initial situation was as follows:
As I mentioned in my last post, this page required 19 (data) resources to be fully loaded. This was becoming a huge problem because the server would start to reject requests after viewing a few boats. This all had to do with the "X-RateLimit-Limit" header. In simple terms, the website requested too many data points in a given period of time.
When I initially implemented GraphQL, I got a significant reduction in XHR requests. I went from 19 (data) resources to "only" 10. See the screenshot below for these requests:
That situation looks a lot cleaner already, right? Well, I wasn't done yet! All I did in that particular round of improvements was group static resources to the best of my abilities. However, I figured out that it's possible to batch GraphQL queries, so you only need a single XHR request to get multiple data sources. This is where I expected to gain the most. I've posted a screenshot with the results of that improvement below.
There are several new things going on in this screenshot other than GraphQL. I've added cache busting for HTML templates. This adds the benefit that clients only download HTML files when they've actually been updated in a new build of the application. Additionally, the first two calls have nothing to do with the actual product page itself. They are simply optimizations to the chunking of translations for the website. Before, every user had to download all languages. Now, they download only one, unless the active language gets switched, of course.
Anyway, as you can see, all static resources have been combined into a single XHR request (the third request). The application then registers a page view and loads the user notifications for the first time (mind you, this is a hard refresh, not a simple state change). Lastly, the dynamic resources are loaded, which are now only three instead of six. In total, this product page now needs six XHR requests, and that includes registering a page view and the initial user notifications. So since starting to implement GraphQL, I've gone from 19 to 6 requests.
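For the curious, the batching itself is conceptually simple. Below is a rough sketch, in PHP, of what a batched request body can look like; the field names are made up for illustration and are not the application's real schema:

```php
<?php
// Hypothetical sketch: several GraphQL queries batched into one POST body.
// The server answers with an array of results, one per query, in order.
$payload = json_encode([
    ['query' => '{ product(id: 1) { name description } }'],
    ['query' => '{ pricing(productId: 1) { amount currency } }'],
    ['query' => '{ relatedProducts(productId: 1) { id name } }'],
]);
// $payload is sent to the GraphQL endpoint in a single XHR request.
```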
This page is done for now, until I find a way (and a need) to further optimize these resources. Do you have any tips on how I could further improve these requests? Let me know in the comments, I'd love to learn from you.
I'm writing this style guide to make my own code more readable and maintainable. This guide attempts to cover all of the code I write and work with, so it will be expanded over time, as I find more things to document. So far, this guide covers Laravel (and PHP in general), and JavaScript.
Since I mainly work with Laravel, I'll base the PHP section of this guide on that framework. I'll describe things like setting up routes, dealing with service providers, namespacing, unit tests, and exposing API endpoints. I won't get into how these work, because that's beside the point of this post. All I'm doing here is describing how they should be written and formatted.
The route files will be split according to purpose. There will be namespaces for Web and Api routing. To keep a clear overview for other developers, route groups will be used. An example can be found below.
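A minimal sketch of what such grouped route files could look like (the prefixes, middleware, and controller names below are illustrative placeholders, not rules from a real project):

```php
<?php
// routes/web.php — hypothetical example of grouping web routes by purpose.
Route::prefix('admin')->middleware('auth')->group(function () {
    Route::get('/products', 'Admin\ProductController@index');
    Route::get('/products/{id}', 'Admin\ProductController@show');
});

// routes/api.php — hypothetical example for the Api namespace.
Route::prefix('v1')->group(function () {
    Route::get('/products', 'Api\ProductController@index');
});
```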
We all need cronjobs for certain automation tasks, but sometimes these tasks take so long that they become a huge burden on you and your ecosystem. I had a cronjob that took 36 hours and improved it to fully run in 2 hours. Let's get into how I did this!
The first step was to clean up my code. Some scripts had to go through three or four methods to get the required data and be formatted properly. This did actually have a valid reason (when I first wrote the scripts): I wanted to avoid repeating code. I needed the data in similar formats that the other methods provided me, so I would simply grab that data and modify it to fit my needs instead of writing a new customized (but very similar) method to get the data how I need it right away.
This worked, but turned out to be very slow in the long run. I figured I'd rather have quick and efficient code than slow code that's not repeated anywhere. So I moved all the code into a single method and kept reducing it until it was clean. This alone sped up the script by about 2 hours.
Another performance gain was achieved by caching as much data as I possibly could. If the data was unlikely to change throughout the life-cycle of the cronjob and would be requested repeatedly, I added a caching layer on top of it. This didn't speed up the script as much as I thought it would, because not a lot of resources are repeated throughout the life-cycle. It did, however, buy me a 30-minute boost. Not a complete waste of time, but not significant enough to really make a difference.
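As a sketch of what such a caching layer can look like in Laravel (the key name, TTL, and model here are illustrative, not the real code):

```php
<?php
// Hypothetical sketch: cache data that won't change during the cronjob's
// life-cycle, so repeated lookups hit the cache instead of the database.
// Note: the TTL argument is in minutes in older Laravel versions and in
// seconds from Laravel 5.8 onwards.
$categories = Cache::remember('cron:categories', 180, function () {
    return Category::all();
});
```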
I achieved the biggest performance gain by moving some of the long-running parts of the script to asynchronous jobs. This includes jobs that interact with the database, image manipulation, and larger calculations. This sped up the script from about 33 hours to 2.5 hours. These processes had very little to do with the progression of the main script, so I decided to completely separate them from the main process into their own secluded tasks.
If there is a script you expect to take a long time, or that at the very least has blocking processes, use asynchronous jobs. These jobs will be completed in their own time and will not block the progress of the main script. However, keep in mind that any data processed in these jobs is not available to your main process. If you absolutely need the data that a job generates for your main script, there is, unfortunately, no easy way to make it asynchronous, because you simply can't expect something to be done exactly when you want it to be done. But if it's just some image manipulation or a lot of calculations that are not needed to progress the main script, make it asynchronous!
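In Laravel, moving such work to an asynchronous job can be sketched like this (ResizeProductImages is a made-up job class name, assumed to implement the ShouldQueue interface):

```php
<?php
// Hypothetical sketch: push image manipulation onto the queue so the main
// cronjob script doesn't block on it.
dispatch(new ResizeProductImages($product));
// The main script continues immediately; a queue worker started with
// "php artisan queue:work" picks the job up in its own time.
```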
If you have any questions or remarks, please leave me a comment and I'd love to help you out. If you have any tips on how to get better results than I described here, let me know too! I'd love to learn from you!
In the previous log, I mentioned that I had a product page that had too many XHR requests and was overloading the server with a high visitor count. To combat this, I came up with the solution to combine some of these XHR requests into a single call. I wrote that I was going to do this through GraphQL API endpoints. It's a few days later now and I've done exactly what I described.
First of all, I added a screenshot of the old situation. This screenshot includes the initial XHR requests and the asynchronous calls after the view has loaded.
As you can see, this page requires 19 resource requests to be fully loaded, which is ridiculous. It even has a call that just gives up and returns a 500 error.
This page has two different types of resources: static resources, and dynamic resources. Most of the static resources are loaded before the view renders because they're simply there to display data on the view. The dynamic resources include pricing and data that will change as the state of the application changes. This also includes related products, as they will change with the state of the application (for this particular product).
Realistically I'd be able to merge these 19 resource requests into 2 to 4 requests, or so I thought. So I set out to merge all the static resources first. The initial server set-up took some time, but once that was done, the data structure was a breeze to set up.
The following screenshot shows the merged static resources (the first two).
Initially, I tried to merge all the static resources, but then I realized that was illogical. The second request is a resource that shows data related to the logged-in user and has nothing to do with the actual product. This is why I decided against merging it with the product resource. As you can see in the screenshots, I now "only" need 10 resource requests. All static resources have been combined from 9 into 2 requests.
The next step is to find a way to merge all dynamic resources into 1 or 2 requests as well, at least for the initial rendering. After the initial data has been loaded, any new data can be loaded through the normal API calls, because speed is no longer the main priority at that point. Since the additional requests after the first load require user interaction to be triggered, their loading times and calculations are far less of a strain on the server than reloading all 19 resources the page used to need.
If you haven't read the previous part of this log, please do so through the following link, as it will give context to this log. Modernizing log: Part 1, Conventional REST API to GraphQL
Do you have any tips on how I should approach merging the dynamic XHR requests? Let me know in the comments, I'd love to learn from you.
When I looked into using Docker for my projects, I was very intimidated by "the Dockerfile". This was until I realized that I've been doing a similar thing for a year outside of Docker.
A year or so back, I wrote a full installation script that needed to be run on any new server to install any necessary software for a particular project. I thought this was the best thing in the world because with a single command I could install the entire application and all its dependencies.
So it still confuses me that I thought Docker was difficult to understand. It's exactly the same concept as the full installation script, but instead of installing any software on the Host OS, you install it in a contained environment. So when I figured this out, I built a single Dockerfile for my projects, containing everything I needed to get started. Then I thought to myself, "Docker is used as a container service, why do I use a single large container?". It felt like I wasn't using the software as intended. This is when I came across docker-compose.
Docker compose manages multiple different containers for you through a docker-compose.yml file. Now I finally felt like I was taking full advantage of the different containers.
An example of this can be found here:
version: "2"
services:
nginx:
build:
context: ./nginx
ports:
- "8080:80"
volumes:
- ../:/var/app
fpm:
build:
context: ./fpm
volumes:
- ../:/var/app
expose:
- "9000"
redis:
image: redis
expose:
- "6379"
solr:
image: solr:7.2.1-alpine
expose:
- "8983"
volumes:
- ./solr/search_core:/opt/solr/server/solr/search_core
This file seems a bit strange if you've never worked with Docker or docker-compose before, but it's actually really simple. The version simply marks which version of the compose file format you'd like to use. The services block is where it gets interesting, because this is where you define your different containers. As you can see, I have four different containers.
The first service is nginx, because you'll need some kind of web server, and I like Nginx better than Apache. Unlike Apache, Nginx has no built-in PHP module, so it has to pass PHP requests on to a separate process. This is why I also have a PHP (FPM) container defined. The "context" argument here simply means that any configuration I'd like to do is located in a Dockerfile at the given location. In this Dockerfile I have defined what software the container should run.
This is an example of the Dockerfile for the Nginx service:
FROM nginx
ADD ./default.conf /etc/nginx/conf.d/
RUN echo "daemon off;" >> /etc/nginx/nginx.conf
CMD service nginx start
All this Dockerfile does is customize the default Nginx web server configuration. Then it tells Nginx not to run in daemon mode (otherwise the container would exit right away). The CMD directive simply starts the Nginx service. The configuration that's being applied to the Nginx container can be found here:
server {
listen 80 default_server;
root /var/app/public;
index index.php index.html;
gzip on;
gzip_vary on;
gzip_proxied any;
gzip_disable "msie6";
gzip_comp_level 6;
gzip_buffers 4 4k;
gzip_types text/css application/javascript text/javascript text/plain text/xml application/json application/x-font-opentype application/x-font-truetype application/x-font-ttf application/xml font/eot font/opentype font/otf image/svg+xml;
gzip_min_length 1000;
rewrite_log on;
# serve static files directly
location ~* \.(jpg|jpeg|gif|css|png|js|ico|html)$ {
access_log off;
expires max;
log_not_found off;
}
location / {
try_files $uri $uri/ /index.php?$query_string;
}
location ~* \.php$ {
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass fpm:9000;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}
location ~ /\.ht {
deny all;
}
}
This configuration makes sure that the web server serves the PHP files correctly and adds some caching headers to static files. All requests are forwarded to the PHP container through "fastcgi_pass fpm:9000;". Obviously, this Nginx installation is not set up for SSL, but for development purposes, this has not been deemed necessary (yet).
The third service is Redis, but as you can see, I defined an "image" argument here. This means that I'm using a pre-built Docker image and don't wish to make any adjustments (in this case, at least). I simply expose port 6379 to be able to monitor the service from my Host OS. This is not recommended in a production environment, because the outside world would be able to access it. Docker provides internal pointers to this port, so you'll be able to use it in the other containers without exposing it to the outside.
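For example, from inside the fpm container the Redis service is reachable by its compose service name. Here is a sketch, assuming the phpredis extension is installed in that container:

```php
<?php
// Hypothetical sketch: within the compose network, the service name
// "redis" resolves as a hostname, so no exposed port is needed.
$redis = new Redis();           // provided by the phpredis extension
$redis->connect('redis', 6379); // host = the compose service name
$redis->set('greeting', 'hello from the fpm container');
```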
The fourth service is again a pre-built Docker image, but this time I'm attaching a host volume, through "volumes". What this means is that I'm allowing the Docker container to interact with a folder or folders on my Host OS. This way I'm able to use information from my own hard drive inside the container.
Docker and docker-compose make it very simple to work together with colleagues on the same code because all the code runs in the exact same environment. It doesn't matter if they use Mac OSX, Windows or Linux, the application environment will always be identical.
So if you've not tried Docker yet, or you're intimidated by it, give it a try and don't give up. When you get it to work, it'll be a wonderfully simple experience to add functionality to your application. If you have any tips on how I can improve any of my examples here, please let me know! I'd love to learn more from you!
I've been working full-time, right out of college, for two years now. Initially, I thought this would limit my time for improving my programming skills. It most certainly did not, and here's what I've learned since then.
First of all, I switched to Linux. I started out programming on a laptop with Windows installed on it. I was very much against using Mac OSX as my main operating system (and I still am), so I never bought a Macbook. But Windows is simply horrific to work with if you're a programmer. This is why I switched to a Linux-based operating system, Ubuntu in my case. This is one of the best things I could've done. Using Ubuntu as my primary system made me learn to use the terminal for most of my daily tasks. This, in turn, taught me valuable lessons about installing software on Ubuntu-based servers.
Second of all, I learned to use languages other than PHP and JavaScript. To me, PHP and JavaScript are one of the best foundations you can have to start building websites. Of course, you can swap PHP for Python or Ruby, but any of those combinations is a great base to build from. From JavaScript and PHP, I went on to learn bits and pieces of Python and Java. I picked up some basic Python principles by interacting with the Raspberry Pi, and I learned Java through Solr. Solr is a Java-based search engine, much like Elasticsearch.
Third of all, I started using Docker. Having switched to Ubuntu as my primary system, I was very familiar with the Unix environment. This made the switch to Docker 10 times easier. Writing Dockerfiles was a breeze once I understood the different types of commands, like RUN, CMD, and ADD. This makes developing with other people much easier, but it also makes deploying your application very easy.
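For illustration, a tiny Dockerfile using those command types might look like this (the base image and paths are just examples, not a file from my projects):

```dockerfile
# The base image to build on
FROM php:7.1-apache

# RUN executes a command while the image is being built
RUN docker-php-ext-install pdo_mysql

# ADD copies files from the Host OS into the image
ADD . /var/www/html

# CMD defines the command the container runs when it starts
CMD ["apache2-foreground"]
```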
I know I'll continue to learn new things as I keep working and I'm excited to see what the future will bring.
What are some of the things you've learned during your job that you thought you'd never learn or need?
I've been working on a Laravel and AngularJS application for two years now. It's slowly becoming more and more complex and it's starting to become very difficult to manage. Every single Angular view needs at least 5 different resources to fully work, and this is becoming a problem for our servers with a high visitor count.
Lately, I've been reading about GraphQL and how you can query exactly the data you need in a single HTTP request. This would solve a lot of problems I'm currently experiencing with PHP-FPM.
So right now I'll research and set up a testing page that makes a single HTTP request to a GraphQL API endpoint. I'm going to see if this reduces the high server load I'm currently experiencing. Along with the server load, I'm going to measure the loading times for this single request. The current solution for a product page makes 20 different resource requests, but these requests are tiny, so the page loads quickly. However, with a high visitor load, this completely overloads PHP-FPM.
So there are two things I'm going to have to test for now: server load (preferably seeing a huge reduction), and response times (preferably low enough to facilitate a quick page load).
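To give an idea of what that single request could look like, here's a hypothetical GraphQL query for a product page. The field names are made up, but it shows how one request can replace many small resource requests:

```graphql
query ProductPage($id: ID!) {
  product(id: $id) {
    name
    price
    images {
      url
    }
    reviews(first: 5) {
      author
      rating
      body
    }
    relatedProducts(first: 4) {
      id
      name
    }
  }
}
```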
In the next post, I'll document my findings.
Some people are night owls and others flourish in the morning, but all people deal with waking up. Some people are just better at it than others. But even night owls can find ways to wake up more easily in the morning. So I'll be giving you 7 simple tips to help you wake up more easily. You can treat each of these tips as a small activity or improvement; they don't have to be done together, but it does help.
An alarm that scares you awake in the morning seems like a great idea because you'll be awake instantly. However, you'll also be irritated. Waking up more easily and naturally is all about feeling good when you wake up. There are a few ways to accomplish this, for example, something that slowly wakes you up over a period of time. A Philips Hue light wakes you up over a period of 30 or so minutes through a steady increase in light intensity and a sound that progressively gets louder. This makes it feel like the sun is rising and it's time to get up and be active. Another way to do this is to use an app called Sleep Cycle, which I mentioned in my previous post about "How to improve your working day". Sleep Cycle uses your phone's sensors to measure when you're fast asleep or when you're almost awake. It will wake you up when you're almost awake, and this will cause you to feel less tired than when you're rudely awakened by a loud noise.
When you snooze and go back to sleep, your sleep cycle starts again and won't have enough time to finish. This means that your alarm will wake you in the early stages of deep sleep and you'll end up feeling tired and irritated. So when your alarm goes off, get out of bed. Literally throw your blankets off you to avoid falling back asleep. Get out of bed and start to slowly get active.
There are two different kinds of "clocks": an external clock (the 24-hour day) and an internal clock (your own day/night cycle). When you expose yourself to natural sunlight, you can effectively influence your internal clock. This means you can make your body feel it's time to get up, which helps you energize yourself. According to Chloe Fung Choi Yi, some people have an internal cycle that's shorter or longer than the external 24-hour day. By triggering your body to wake up through light, you can try to synchronize your internal clock with the external clock. When your internal and external clocks are synchronized, it'll feel more natural to wake up at a certain time, and this will make it easier to get out of bed.
If you've read my previous two posts you'll know I like to recommend drinking water often. Drinking water is also a good way to help yourself wake up in the morning. It will kickstart your digestive system and will make you feel more hungry and more motivated to get up and have breakfast. According to some people (google it), drinking water helps you remember dreams more easily, so there you go, a fun little extra benefit if it's true.
Humans have a natural tendency to see patterns. This used to be a survival instinct, for example, running away from lions without having to think about it. This means that our bodies and minds feel happier when we don't break patterns. Sleeping on a very irregular schedule is breaking a pattern. Your brain will have difficulty coping with this habit over a longer period of time. Sleeping on a regular schedule relaxes your mind and will, in turn, help you to wake up more easily and feel more rested.
Sleep in a dark and quiet area without too many distractions, like screens. Artificial light, like lamps and screens, disrupts your sleep cycle, which in turn makes waking up more difficult. So prepare yourself to go to sleep by turning off the lights and screens, except perhaps a candle. Candles don't have the same light intensity as other artificial light, so they will affect you less. I've been trying an alternative method, using an app called Twilight. It dims your screen and makes it red after sunset. This reddish light helps your eyes prepare for sleep. This may be something you could try yourself too.
This goes along the same lines as getting a regular sleeping pattern. Taking naps is breaking your natural sleeping pattern. It may give you a short energy boost, but it will make it more difficult to sleep at night. This is why I recommend not taking any naps during the day. If you're really tired, just go to bed a little bit earlier, but not too early compared to normal.
If you found these tips useful, consider sharing this post with your friends to help them wake up more easily as well. I know they'll appreciate feeling more rested just as much as you. If you have any tips that I may have missed, please let me know! You can contact me on Twitter (@RJElsinga) and Instagram (@roelof1001).
Everyone has a bad day every once in a while, but there are ways to reduce how often that happens. I put together a list of a few simple steps you can take to improve your days little by little.
This doesn't necessarily mean sleeping for a long time. It just means sleeping so you feel fully rested. There are a few ways to make this easier for yourself, but I've found that the easiest way is to use an app called Sleep Cycle. It tracks your sleep stages and can wake you up when you're in a less deep sleep, causing you to wake up feeling rested. A normal alarm could wake you up while you're in a deep sleep, which would cause you to wake up feeling tired and irritated.
Alongside waking up feeling rested, it's also a good idea to wake up a bit earlier. You will have enough time to do everything at a slower pace. This way you'll feel less stressed and you can wake up a bit slower. I've been doing this for a while and it's very relaxing to be able to read something while you eat your breakfast, without having to hurry to get to work.
If you're a person who hates waking up and is very slow and tired in the morning, try a 5-minute workout. You can do some push-ups and some crunches, anything to get your body warm and active. This will make you feel awake. Are you still a little tired? Try a shower right after your workout; this will definitely wake you up quickly.
If you've read my previous blog about how to stay healthy as a developer, you'll know I love suggesting to drink more water. So I will do that again. Drinking water keeps your digestive system clean and will over time show other improvements, like clearer skin. Being hydrated helps you concentrate better and get more work done, which is why you'll most likely feel better about your day. You were just more productive!
I hate feeling bloated after a fast food meal, so I simply bought it less and less. This has helped me feel better after healthier meals, and I feel ready to get to the next task of the day right after eating. Healthy food doesn't have to be expensive like everyone claims it is. If you're smart about your meal prepping, it can even save you some money, and you're rewarded with feeling active and fresh after your meals. So I suggest you start looking for meal-prepped lunches to bring to work and see what you think. If you like it, perfect, keep going. If you don't like it, well, that's too bad, maybe you'll find another way!
Breakfast, the most important meal of the day. Seriously, eat breakfast. Feeling hungry is horrible and it even makes you feel tired. After a night's sleep, your body is empty, there is no more food to digest. This means that your body is running out of fuel quickly. When this happens, you get tired. And getting tired at work is just not good for your productivity. So please, eat a good and healthy breakfast. If you wake up early enough you'll have plenty of time for it anyway. A good base for your breakfast is some grains, maybe some oatmeal, along with a piece of fruit, like an apple. This will kickstart your body to be ready for a new day.
When you do get to work and you're fully awake and feeling full from your breakfast, make a to-do list. This list will contain everything you do that day, in chronological order. This way you can start at the top of your list and work your way down. You'll feel great crossing off all the tasks you had planned. It'll feel amazing having done all or most of the tasks at the end of the day and you'll have visualized all the hard work you've done.
Plan something to do in the evening; you'll be looking forward to it all day. This will help you get through the less enjoyable parts of your day and will make the good parts even better. This fun plan could be as simple as walking outside and taking pictures of the sunset, having a picnic in your garden, or reading a good book by a campfire. It doesn't need to be a big thing, but it needs to be something you really enjoy doing.
Sometimes you just run out of things to do in the evening. When that happens, it's time to look for a new thing to do. This can be done during the evening, or in your weekend. Do something you've never done before, or something you haven't done in a long time. Look for a fun restaurant, maybe do some painting, or build model planes, like me. This will broaden your list of activities and you'll be able to build onto it. So if you're building model planes, you could move on to building cars or model airports or something along those lines. You can expand and experience new things, and this can be very fun.
Every once in a while, our brains are just too stressed to be able to do anything or relax. This is when you need to plan breaks. During these breaks, you are not allowed to think about the things you're stressing about. This can be very tough, but trust me, you'll appreciate it. After this downtime, you'll most likely think about your problem in a different way than you did before, and that fresh perspective could help you solve it.
You make you happy, others don't. This one doesn't have as much of a direct impact as the others, but it will help you in the long run. When you stop caring about what other, non-relevant people think about you or anything you do, you'll feel a weight falling from your shoulders. You're living your life, not theirs. And they don't live your life. This could be tough to let go of, because it's something that's rooted deep in our culture these days, but once you do, you'll feel free. You won't have to please anyone but yourself and the people you choose to listen to. It's a great feeling.
I hope you've enjoyed this blog, share any of your own suggestions with me on Twitter.
We web designers and developers sit down all day long while we're at work. We don't move as much as we probably should, so many of us are probably not as healthy as we could and should be. I'll give you a few tips to get healthier, feel better during the day, and sleep better at night.
This could be a no-brainer, but we're trying to maximize your body movement during the day. If your work is too far away to walk or bike and you have to take the bus, get off a stop earlier and walk the last part. When you're taking the car to work, try parking further away from work and walking the last part. And if you're working in an office building and have to take the elevator, take the stairs instead, or don't take the elevator all the way up and climb the stairs for the last part. Whatever your travel situation, try to walk as much as you can. This can give you 1,000 to 2,000 extra steps, and that's an amazing start to the day.
Coffee is often said to be dehydrating, and drinking too much of it can also be bad for your physical health. Try to switch some of your cups of coffee to tea or water throughout the day. I personally reduced my coffee intake from 5 or 6 cups per day to 2 or 3, and it doesn't feel any different. A good way to start replacing some of your coffee with water is to have a bottle or cup of water on your desk at all times. When the cup is right there, you're more likely to drink from it, and most of the time you don't even realize it. If you need a way to track your water intake, I can highly recommend Hydro Coach. I've been using the app for almost two years and it reminds me to drink when I need to.
Taking a stroll every hour or two is not just good for your body movement, it can also help you think. Taking a stroll every so often can help you think about a problem in a different way. Maybe you see something or someone that can help you solve the problem. It's also a great time to fill up your cup of water after you drank the last one.
I hated making lunches for the day in the morning; it took up too much of my time. When you start meal prepping, you can make all your lunches during the weekend and just grab a box during the week. This is not only very convenient, but it's also healthier and cheaper than buying lunch every day. You decide what goes into your lunches; you can make them as healthy as you want, with some chicken and rice, or less healthy with some pasta. Either way, it's more convenient to bring a ready-made box every day.
Getting a step counter is a perfect way to keep track of your progress and to challenge yourself. In the beginning, I started out with 8,000 steps per day, just to see if I could reach that while sitting down all day. It turned out that I could, and I started to challenge myself. I set my goal at 10,000 steps and kept going up until I had trouble reaching my goal. I'm currently at 12,000 steps per day and I feel better about myself. I sleep better during the night because I exercise a lot more during the day, and therefore I feel rested during the day. That also helps with the second step, reducing your coffee intake.
Becoming healthy alone can be difficult. Nobody will stop you from being unhealthy at times and you can easily slip back into your old, bad habits. You'll be more likely to succeed when you have other people to help you stay motivated. This is why Sander and I are coming up with a concept to help developers to keep each other healthy.
What have you tried to get healthy as a developer or office worker? Share your experiences with me on Twitter!
You've all seen programming books on the internet or in bookstores. But most of us know that those books are often not relevant anymore; many of them are outdated. So should you buy them? I think you should, but under a few conditions.
The maturity of the language or framework is incredibly important for the relevancy of the book. I'll use two examples here. I bought a book on AngularJS to learn the framework. At the time, AngularJS was already a few years old, so the book had gone through a few revisions and was more in line with how AngularJS actually worked. Fast forward two years: I bought an Angular 2 book. Angular 2 was still in beta at the time and was constantly changing. I couldn't use the book at all because it was written before the Angular CLI existed, which made the book useless. The only thing I could use it for was understanding the concepts of the framework, but the actual coding examples were irrelevant.
Books about data analysis with Apache Spark are really fun, but you won't be able to use them if you have no clue how to set up a server or work with databases. You should get books that help you improve your skills, not books that are too complicated for your skill level; you'll end up feeling dumb and unmotivated. You'll get to that level through practice and more practice. Start at your own level, or ideally, a little bit above it. If you're just starting out, get very general knowledge books. They'll help you start understanding how a language or technique works, and they'll form a basis on which you can build skills. If you get very specific books right from the start, something like "Machine Learning with Python" instead of "Python: The Beginner's Guide", you will not understand why certain parts of the program behave the way they do.
I'm a PHP and JavaScript programmer, which is why learning Python from the ground up doesn't really make sense for me; it won't help me do my job better. However, knowing something from another language is definitely not a bad thing. Maybe you need to build a new application and your current programming language is too limiting to accomplish this. Well, then you have a great reason to use another language that's much better suited to the task. This project will help you develop new skills and build a better application than you'd have been able to make before learning the new language. What I'm saying is: if you're a JavaScript developer, don't start learning something like C++. This won't have an immediate benefit for you and it'll most likely cost you a lot of time. My suggestion would be to slowly make your way towards the language, don't sprint there.
Books can be an amazing way to learn a new programming language, but keep in mind that the new language should be something that's achievable for you. Make the experience eye-opening and challenging, but don't make it an impossible task. When you challenge yourself you'll pick up the new language very quickly. If you make it impossible, you'll never touch the book again. Make sure the language you do decide to buy a book for is something that you'll end up using a lot of the time, otherwise you'll forget all about it and you will have wasted your time.
Have you found amazing programming books that have helped you to learn a new language? Share them with me on Twitter!
People keep asking which framework to use (Angular, React, or VueJS) and which one is better. I can understand they want to know which one to use for their projects, but it's a silly question; it's comparing apples to oranges. I'll explain why you should use one over the other in a specific situation, but it's only my humble opinion.
Before I start with Angular, I have to clarify that I'm biased towards it. I have been using AngularJS for 3 years and I've put a considerable amount of time into Angular 4 (just Angular from now on). We'll stick to Angular here, because AngularJS is not really updated anymore and it's becoming outdated. So when should you use Angular? Well, when you're building a medium to large application. The set-up time is longer than for both React and VueJS, but it's also the full package. Where React and VueJS only cover the user interface, Angular is the user interface and it has other things "included". I say included here, but they have been split out of the Angular core into separate modules since Angular version 4 launched. Angular uses TypeScript instead of JavaScript, which is a big turn-off for a lot of people, but I've come to really enjoy it. I mentioned that Angular is mainly good for medium or large applications because it's not very easily used as a "drop-in" framework: you either make all pages with Angular or you make none.
As I said earlier, React only deals with the user interface. If you want things like a router or any way to interact with the server, you'll need to find modules yourself and integrate them with React. Many people like this, as it gives you the freedom to choose whichever module you please. React can be used as a drop-in framework, so you can build parts of a page with React instead of having to build the entire page with a single framework, like Angular. Since React can easily be used as a drop-in framework but also has a router module available, it can be used for small to large applications. The set-up time is minimal, but it does have a fair learning curve. Where Angular uses TypeScript, React uses JSX, which means all the logic and the templates can be built in a single file.
I'd like to call VueJS "All the right things of AngularJS". AngularJS will always have a special place in my heart and that's why I'm liking what VueJS is doing. VueJS is also a framework that only deals with the user interface, just like React. It does have a router and modules to deal with server interaction available, so it's fairly similar to React in that way. It's also a drop-in framework, which means you can use VueJS for small applications. I wouldn't recommend using it for medium or larger applications just yet. It's a new framework and the file organization needs some work because it can get messy. That's why I recommend using it for smaller applications. You can set it up in a breeze, so you can get started quickly. VueJS actually uses plain Javascript, which I really appreciate. There isn't really anything new to learn except some of the directives that AngularJS and Angular have as well.
I hope that clears up the battle of the apples and oranges a little bit. The frameworks are completely different and don't even have the same use case. Angular suits large applications and has all the most-used modules built in. React and VueJS are both for the user interface alone and don't include any of the modules that deal with server interaction, which means the developer is free to choose any modules to fill these gaps. React and VueJS are comparable because they are both only for the user interface, but they still don't serve the same use case. React is for small to large applications because its file organization is simpler than VueJS's. VueJS is for small applications only for now, simply because it hasn't had the time to mature just yet. You can use any of these frameworks to make single-page applications, or React and VueJS for some dynamic elements.
If you'd like to talk about this subject further, follow me on Twitter @RJElsinga or Instagram @roelof1001.
Sharing knowledge about a topic you're passionate about is one of the most fun things you can do. You can share your interests by talking to colleagues, going to meet-ups, or simply listening to podcasts. Which is what this post happens to be about, what a coincidence. Sander and I (Roelof) both listen to podcasts, but we listen to different types because we have different interests and skill sets. I mainly listen to developer podcasts, either front-end development, PHP, or just JavaScript specifically; they all interest me very much. Sander listens to branding and design podcasts. Below we will each describe some podcasts we listen to and explain why we choose to listen to them.
Front-end Happy Hour is a podcast with a panel from different companies, ranging from Netflix to LinkedIn, to Atlassian and Evernote. The podcast is about anything related to front-end development, so there is a lot of JavaScript, but also HTML5, CSS, and even Swift. I like hearing what people working at big innovative companies think about different situations, especially situations I have been in myself. Not only is it great to hear what alternatives they have used as opposed to my own solutions, but I can also learn to use different tools or approaches to deal with a situation. They upload a new podcast every two weeks, so be sure to check them out!
NodeUp is a podcast that's all about NodeJS. Recently I've been more and more into using NodeJS in my projects. A few years ago I made a simple application in NodeJS and AngularJS just to try it out. I stuck with AngularJS and sort of put NodeJS to the side. Now I'm trying to get back into it, and listening to these podcasts has helped me understand certain topics and concepts better. Coming from PHP on the server, it's hard for me to imagine how JavaScript on the server can be secure, so this podcast has helped me understand better how this works and what you can do to secure your applications.
ShopTalk is a podcast about web design. I've been working on designs and front-end development a lot in the past few weeks, so listening to people talk about it helps me find new ways to solve some problems I could be having. The podcast is similar to Front-end Happy Hour in its range of topics, but the personalities are different. One of the hosts is the founder of CSS-Tricks, a website I use fairly often these days. So it's exciting to hear what he and his co-host have to say about web design.
Travis and Carlos will help you develop yourself as a person. They are two awesome people; Travis is also known for his channel DevTips (which, at the time of writing, is on a break due to a burnout). The podcasts are focused on you as a person, design, branding, and front-end. They discuss many things and sometimes invite other people to join their podcast. They upload on a regular basis, which is fun!
Sander's favorite company is Basic Agency from San Diego; they have really awesome and well-known clients. Why should you listen to their podcast? They help the design community all around the world become better, and they invest a decent amount of time in this. The podcast is one example. Most podcasts are general and all say the same things, while the podcasts from Basic Agency go in depth. It's fun to listen to these people.
Do you want to listen to the big names? Then this podcast is something for you. I check Designer News every day to stay up-to-date as a web designer and front-end developer, and I've listened to all their podcasts. There are really big names in these podcasts, people that made it! Listen to their experiences and you might learn something that will help you grow or understand how things work within the web design world.
If you want to talk about your favorite podcasts with us, or give us some suggestions, get in touch! You can follow me on Twitter @RJElsinga and on Instagram; be sure to check out Sander on Instagram as well! If you're interested in more of our posts, maybe try learning how to make more time for side projects!
Any developer will have a set of developer tools they swear by, a set of tools that does everything they need it to do. People can say that tools are interchangeable, and to an extent they certainly are. However, the set of tools a developer uses often dictates their workflow. With that said, I'd like to move on to the part where I tell you which tools I use on a daily basis.
One of my main programming tools is an IDE called PHPStorm, which I think almost all PHP programmers have at least heard of. The editor comes with built-in terminals, which I find a really useful feature. I usually use 3 or 4 terminals at the same time, and the editor makes it very easy to manage all of them. Another feature I use a lot is the search functionality: you can use a few keywords to search for a string in your whole project, which makes developing easier and less tedious.
If, for any reason, I ever need to change any live code, I use the command-line editor Nano, or Atom combined with FileZilla. Luckily I don't go down this route too often, because any mistakes will immediately be reflected in production. Normally I change everything I need locally and get it into production through Git and GitHub, two other tools I use on a daily basis. Along with Git, there are, of course, NPM and Composer to pull all the required packages into my projects. If you're not using package managers for your projects in 2017, you should check them out. They make keeping your applications up-to-date a breeze, and they let you take advantage of thousands of open source packages built by other people.
Testing is a very important part of the build process. Luckily Laravel, the PHP framework I use for most of my projects, has PHPUnit support built in, which makes writing tests very easy. With a few simple lines of code, you will always know if the methods you write act as you intended. This is a very good process to run before you're ready to publish your code, just to make sure what you wrote actually works.
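As a minimal sketch of what such a test can look like (the Calculator class here is a made-up example, not code from one of my projects):

```php
<?php

use PHPUnit\Framework\TestCase;

// Hypothetical class under test
class Calculator
{
    public function add(int $a, int $b): int
    {
        return $a + $b;
    }
}

class CalculatorTest extends TestCase
{
    public function testAddReturnsTheSum()
    {
        $calculator = new Calculator();

        // A few simple lines tell you the method acts as intended
        $this->assertEquals(5, $calculator->add(2, 3));
    }
}
```

In a Laravel project, tests like this live in the tests folder and run with a single command, so there's no excuse to skip them.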
Sometimes you just really need a browser to test your application. While building SPAs, for example, I use the Chrome Developer Tools almost 100% of the time. There are two browsers I test in to see if everything goes according to plan: Google Chrome, and Firefox with the Firebug plugin installed. Both come with a console and a network tab to see if there are any logged errors and to inspect what data your browser actually loads or receives from the server. This is very useful for debugging and making sure the browser receives the data you need it to receive.
So those are a few tools I use on a day-to-day basis to make sure the development process goes according to plan and the code I want to publish gets published in an orderly fashion. Because at the end of the day, you want your local code running in a production environment. There is not just one way to get there; everyone will have their own way.
There is nothing more fun than working with APIs during my workday. It's programming like any other day, but it's also so much more! It's connecting other services with your own, using them to enhance your application and make it much richer in functionality. You're essentially using other fine-tuned services to benefit your own service, and sometimes to offload some aspects of your application, like social login buttons through Google, Facebook, and GitHub. I mentioned in an earlier post, "What I've learned building Single page applications", a little bit about how I've been using API calls during my day. I'd like to clarify one thing before we dive into my fascination with APIs: I see an API call as any form of data transfer between two different applications, so it's not limited to HTTP.
Currently, I'm working on a project that involves 4 major connections so far, and it has only just started. My application connects to Sendgrid for sending all my system emails, Zapier for offloading data to other services (there are literally 750 applications connected to it, it's wonderful), GraphCMS for the content management of the application, and Tubbber for all search and database related functionality. So what does my application actually do by itself? Not all that much, except use all these different APIs to give different kinds of data context.
This type of application architecture has become more popular in the last few years. A few years ago, all aspects of your application or platform were combined in one big package; applications nowadays are more broken up, more modular. Each individual component has one very specific task it can do really well. You'll notice that testing these functionalities is a lot easier as well, which is another added benefit.
This is a huge benefit for larger corporate systems, because when one of these services breaks, your application can still partially run normally. If you cache all data going to and from your API calls, your users may not even notice any problems when one of the components of your architecture goes down. Not only does this architecture spread the risk of losing individual components, it also spreads hardware usage, meaning you can downgrade your main server to a smaller size since it won't need to do everything in one place anymore. If you're lucky, you can use all your connecting components for free, which saves you money too.
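As a sketch of that caching idea (all names are hypothetical, and the call is synchronous here for brevity): if the external service throws, serve the last cached response instead of failing outright.

```javascript
// Cache the last good response per service; serve it when the service is down.
const responseCache = new Map();

function callWithFallback(key, fetchFn) {
  try {
    const fresh = fetchFn();          // call the external service
    responseCache.set(key, fresh);    // remember the successful response
    return { data: fresh, fromCache: false };
  } catch (err) {
    if (responseCache.has(key)) {
      // Service is down, but we can still serve the last known data.
      return { data: responseCache.get(key), fromCache: true };
    }
    throw err; // nothing cached yet: the failure is unavoidable
  }
}
```

In a real setup the cache would live in Redis or Memcached rather than an in-process Map, but the decision is the same.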
An aspect of this whole architecture that fascinates me a lot is that all these applications can work together flawlessly. The applications could be written in completely different programming languages, yet they work together. As long as they share a common data structure, or are at least able to parse the same formats (JSON, XML), they will be compatible. I can give one great example of this, because I built a search engine for my work. It utilizes Solr, which is built on top of Java; I built the main system in PHP, but through JSON exchanges I can get information easily.
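A minimal sketch of that exchange: the response body below imitates the shape of a Solr-style JSON reply (the values are invented), and any language with a JSON parser can consume it the same way.

```javascript
// What the Java-based search engine might send back over HTTP (hypothetical data).
const rawResponse =
  '{"response":{"numFound":2,"docs":[{"id":"1","title":"First"},{"id":"2","title":"Second"}]}}';

// The consumer doesn't care what language produced it; it only parses JSON.
const parsed = JSON.parse(rawResponse);
const titles = parsed.response.docs.map((doc) => doc.title);
```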
I like APIs because, with only a few simple lines of code, you can trigger a huge calculation elsewhere. That call will then return exactly the data you requested; the only thing you have to do is ask. You can also use an array of APIs to improve all the connected applications, not just your own. For example, you can grab data from Facebook and use it to enrich your own data. You can then use that data to enrich data in a program like Google Tag Manager or Salesforce.
APIs are amazing to me, so I want to share some platforms to start with. Have a look at:
If you'd like to talk about this subject further, follow me on Twitter @RJElsinga or Instagram @roelof1001.
Web applications are great. They're fast, they can be used on all platforms, and they often feel like real native applications. But then your internet stops working, and you only wanted to check that little note you made earlier. Too bad: you can't connect to the application, so you can't see your note. Bummer! Service workers to the rescue!
To really make web applications competitive with native applications, you'll need to simulate or even enhance the expected behaviour of such apps. This means the app should feel quick and responsive, you should be able to access it whenever and wherever you want, and it should benefit you when you need it. So let's split this expected behaviour into three sections: quick and responsive, accessible whenever and wherever, and personal benefit.
One aspect where a native application usually wins over a web application is that it feels quicker. You don't have to wait for something to appear on your screen, whereas in web applications you often have to wait for data before content shows. This is a deal breaker for a lot of people. A true app should be quick. One solution is browser caching through Nginx or Apache, using the Cache-Control and Expires response headers. The browser will attempt to cache the requested resources, making the second load of your application nearly instantaneous. This is an amazing first step, because your application instantly feels a lot faster. However, the browser still needs to contact the server to even receive response headers, which isn't possible when you don't have any internet. This is where service workers play a huge role.
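For example, a hypothetical Nginx sketch of those caching headers might look like this (the file extensions and duration are assumptions, adjust to your own setup):

```nginx
# Tell browsers to cache static assets for 30 days.
location ~* \.(css|js|png|jpg|svg)$ {
    expires 30d;                       # sets Expires and Cache-Control: max-age
    add_header Cache-Control "public"; # allow any cache to store the response
}
```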
I mentioned in the previous paragraph that browser caching is a great way to reduce bootstrapping time, but it won't work if you're not connected to the internet. Service workers are the solution here. A service worker is essentially a middleman built into the browser. This middleman can intercept any request made from the browser to the server and customize its behaviour. This sounds a little vague, but hang in there. Imagine that this middleman receives a request from you (through the browser). The worker will then look in its memory to see if you've requested this resource before. The resource can be anything from a JS file to a CSS file, HTML, an image, etc. If the worker finds the resource in its memory, it will return it. Did you see what just happened? The request never touched the server. You requested something and the service worker returned a cached version of the requested resource. This way you can create a web application that is available even when you're not connected to the internet.
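The middleman's decision can be sketched as plain JavaScript (names are hypothetical; in a real service worker this logic would live inside a `fetch` event listener using the Cache API):

```javascript
// Cache-first lookup: return the stored copy if we have one, otherwise fetch
// and remember it. `cache` is any Map-like store; `fetchFromNetwork` stands in
// for a real network request.
function cacheFirst(url, cache, fetchFromNetwork) {
  if (cache.has(url)) {
    return cache.get(url); // served from memory, the server is never touched
  }
  const response = fetchFromNetwork(url);
  cache.set(url, response); // remember it for next time
  return response;
}
```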
Offline accessibility is only one of the benefits of service workers. Imagine you're in a remote location and you're connected to the internet, but your connection is incredibly slow. Normally the website will attempt to download all its resources like it always does, just over a slow connection. This can cause the website to load in 3 minutes instead of 3 seconds, which is a terrible user experience. Tadaa! Another task for the service worker. This little worker can recognize the situation and return the cached version instead of attempting to request the resource from the server. The load time is once again three seconds! Service worker out!
That offline web application is great and everything, but if you still need the internet to save data, your web application will still fail at its purpose. It'll look like it's working, but in reality it doesn't do anything besides being pretty and fast. The solution here may not be the most obvious to some of you: you can make use of a fantastic HTML5 feature called IndexedDB. This is an in-browser database that can store JSON objects as simple key-value pairs. When your app is unable to save data to your actual database, it can use IndexedDB as an offline fallback and synchronize with your server at a later point in time, when you do have an internet connection.
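A rough sketch of that fallback idea (a plain array stands in for IndexedDB here, and all names are made up): queue writes while offline, then replay them once a connection returns.

```javascript
// Writes that couldn't reach the server yet. In the browser this array would
// be persisted in an IndexedDB object store instead.
const pending = [];

function saveNote(note, isOnline, sendToServer) {
  if (isOnline) {
    sendToServer(note);
    return 'saved';
  }
  pending.push(note); // queue locally until we reconnect
  return 'queued';
}

function syncPending(sendToServer) {
  while (pending.length > 0) {
    sendToServer(pending.shift()); // replay queued writes in order
  }
}
```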
What does this mean for your app? Well, it means that it looks pretty, it's fast, and it's actually fully functional. This makes your web application more and more competitive with native applications. First of all, your application will behave like a normal native application, no matter what the situation might be. Second of all, don't tell everyone, but it's much cheaper and easier to build web applications than it is to build native applications. That's what I call a win-win situation. So to round up: use service workers to make your web application behave more like a native application in less-than-optimal situations.
Single Page Applications (SPAs) are amazing to build and work with, but they have a lot of disadvantages as well. This post describes some of the things I have learned while building SPAs, along with tips for developers building, or thinking about building, one.
First up is the challenge of having proper titles, meta tags, and general SEO requirements. In some JavaScript frameworks (like ReactJS and Angular) this problem has already been solved. In some older generations of JavaScript frameworks, like AngularJS (version 1.x), the problem still persists. When you don't do anything to properly generate SEO tags, titles, and texts, Google and Facebook will simply not find anything for your website apart from the URL.
A very simple, but in some situations pretty tricky, solution is to use prerender.io. This service uses PhantomJS to render an entire webpage, including titles, tags, and texts. This way, when Google or Facebook crawl your website, they will see all the proper information they need for search results or Facebook's Open Graph cards. At my job we use this service, but not without problems. First of all, you need to make sure you're using HTML5 polyfills for everything. We made use of JavaScript's Promises, but PhantomJS didn't recognize what they meant, so it simply didn't render our pages, causing us to pull our hair out over it. When we discovered Promises were the problem, we switched to Angular's $q promise instead of solving the underlying issue. So if SEO is very important to you and your application, make sure the framework you choose has built-in functionality to render your pages properly for Facebook, Google, etc. A great starting point would be Angular2 or ReactJS.
Another thing I have learned is that file structures are incredibly important, and consistency in file and code placement matters. What does this mean? It means that code and modules should be separated by function, not by type: don't put all controllers in one folder, all services in another, and all directives in yet another. Instead, put all the code, templates, etc. belonging to a specific piece of functionality in its own folder. This may seem tough to start out with, and for small applications it's not necessary, but for large applications it makes your life so much easier. The number of times I gave up looking and resorted to a full text search over all files to find the one I needed is too high. If I had structured my filesystem like this from the beginning, I could simply open the folder for that specific function and have all the code I needed right there. It's a real time-saver.
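As a sketch, a function-based layout might look something like this (the folder and file names are made up):

```
app/
  search/
    search.controller.js
    search.service.js
    search.html
  checkout/
    checkout.controller.js
    checkout.service.js
    checkout.html
```

Everything the search feature needs lives in one folder, instead of being scattered across a controllers folder, a services folder, and a templates folder.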
The last thing I have learned while building SPAs is that the API structure in your back-end is incredibly important. Starting out, I wrote a single API call for each page, collecting a lot of data in one server response. This is slow and the wrong way to go. The asynchronous nature of SPAs makes it easy to use several smaller API calls to get the data you need: while one request is in the queue, other processes can still take place. This helps me load screens and their data much quicker than waiting on larger requests. When the application loads one massive response, the pages have to wait before they're ready to go. So when you structure the API endpoints in the back-end, keep the responses small. This breaks up the loading times, giving users a smoother experience.
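Sketched in JavaScript (the three fetchers are hypothetical stand-ins for real endpoints): fire several small requests in parallel rather than one big one.

```javascript
// Each fetcher hits its own small endpoint and resolves independently, so no
// single huge response blocks the whole page.
function loadPage(fetchUser, fetchPosts, fetchComments) {
  return Promise.all([fetchUser(), fetchPosts(), fetchComments()])
    .then(([user, posts, comments]) => ({ user, posts, comments }));
}
```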
Working remotely: we've all heard of it before. Simply put, you're working on something, but you're not actually in the office while you do it. On some occasions this means you work from home for the day. In others, maybe you're a freelancer working for several different companies from your own office instead of theirs. Or maybe you're taking a trip but still want to work. There are many different ways to work remotely. Let me tell you about my experiences.
I work for a company, but my girlfriend lives in the United States. You might think: well, tough luck, I guess you'll only see her during the holidays. Wrong. I asked my boss and colleagues if they would mind if I worked from another country. They were fine with it; it wasn't the first time. The first time was only 2 weeks. Not a long time, but it was an experiment: could I work remotely or not? In my opinion, it worked really well. I got up at normal times and worked from 9 till 5, just 6 hours later than normal due to time zones. I found a quiet place where I could work, but could still Skype with my colleagues to discuss things. So after this experiment, I wondered if I could do the same for a longer period of time.
This longer period presented itself in March of 2017. I asked if I could once again work from abroad, but a little longer this time. My boss gave me permission, so I booked my trip: a 9-week-long trip. For some this may seem like a very long time, and it is. But I knew that if I found the right environment, I could work just as well abroad as I would in the office. And I found that environment in the same library I used during the previous experiment. I worked from 8 to 4, only a 5-hour difference with my colleagues, and this worked well. We didn't need a lot of Skype meetings, because I knew what was expected of me. We learned to communicate really well through Jira and Slack, so everyone could move on with their work as usual. One small adjustment I made was to set up a small development server, so I could test the same code my colleagues were testing, but over the internet instead of a simple local network.
So for me, the experiments worked out really well. I knew I had enough discipline to keep working on a schedule, without taking unnecessary breaks or treating the whole thing as a long vacation. But the experiments also worked out well because my colleagues were fine with it and were able to adjust to the new situation instantly and flawlessly. Working abroad only works if the whole team is on board and able to work in a non-traditional setting, or at least able to adjust to the new situation pretty easily. So if you think you may want to try something like this sometime, start with a shorter period of time and see if you can do it. If you can, try longer. But don't force yourself to be productive if you know you don't have the discipline it takes to pull it off; it will not work.
When I first started out with web design and development, CSS was this tool to make HTML pages look better. This was just before responsive web design started becoming the new industry standard. Websites were zoomed out and broke on mobile devices, but that was normal at the time. I first learned about media queries in CSS when I was introduced to Twitter Bootstrap, which I still use to this day, because nowadays I'm more of a web developer than a web designer, so most CSS is not done by me anymore. Anyway, Twitter Bootstrap got me into using more of the features CSS3 offers. During my internship, I started to look for ways to make writing CSS more efficient, because I caught myself copying and pasting styles constantly. This is where I discovered LESS. LESS gave me what I wanted at the time: nested CSS. This helped me reduce copying and pasting CSS for the most part.
Yet after using LESS for a few months, I felt like it wasn't quite there yet. Jerke, a guy I worked with, got me started with SASS. This is where I knew I had found what I was looking for all along. SASS offers nested CSS, but also functions and mixins, which helped me to (almost) never copy CSS again. One of the things I have been using a lot more lately is functions. These help me calculate exactly what margins, widths, and heights should be. Obviously, since I discovered Flexbox I've used this less, but it still has its applications. When I started out with LESS, I used a program called Panda to compile my CSS files. This all changed when I switched to SASS: this is where Grunt came in. Grunt constantly watches and compiles my SASS files, so I can instantly test the changes I have made.
But back to using SASS rather than CSS. One of the advantages of SASS (and LESS) is that I can easily include all "modules" (files) in one main file. This way I can make Grunt/Gulp/Webpack watch for changes in all files; when it detects a change, it compiles one new file. This keeps file loading on the website efficient without trading efficiency for ease of use. What I mean is that when I was using normal CSS files, I needed to create multiple files to keep different functionalities separated. Obviously, this is not the way I wanted to work. With SASS, the website loads one single file that is made up of an unlimited number of separate files. I can easily manage these different files, while Grunt does all the compiling for me. That way ease of use is not compromised.
Another important SASS feature I mentioned earlier is the ability to use mixins. This means I can define standard "classes" within the SASS files and easily include them in the styling of other elements. For example, say I want to make an orange button. In plain CSS you could make two different classes, ".button" and ".orange-button". The button class defines the shape, font, and border styles; the orange-button class makes the button orange and implements custom hover styles. A mixin is simpler: it could be defined to take two arguments, color and hover-color. Then in the orange-button class, the mixin could be called with: button(orange, a-darker-orange). This reduces code and, in my opinion, makes it easier to quickly style different elements.
Using SASS has made styling websites fun again for me. Before SASS, I hated styling, because I knew I needed to work in yet another file. I had to include this file in the HTML file and it was just tedious. And that's not even mentioning the enormous CSS files I was already using, where I had to find the right class or ID and hope for the best that changing it wouldn't actually change anything else. SASS has made working on specific elements much more manageable, especially with Grunt having my back by compiling my main SASS file in the background. SASS has reduced my frustrations with styling. It has made it easier and more efficient, and it gave me the opportunity to really structure the files how I want without having to load yet another file in my HTML file.
The age-old (read: a few years old) question: how do you index a single page application? I covered this topic briefly in a previous post about Isomorphic JavaScript. Single Page Applications are fantastic for the user experience, but of course they also have a few disadvantages, one of which is an actual user experience killer. In this post I will describe the two disadvantages I have found using AngularJS (I know, I haven't completely switched to Angular 2, calm yourselves) and a solution to combat both of them. To get started, the disadvantages I have found: the initial page load takes long, which causes users to leave your website, and indexing your website, or any social media sharing, is a pain. I know these issues have mostly been resolved in Angular 2, but a lot of people out there are still using AngularJS, which is why this is still relevant.
So the first disadvantage: the initial page load takes ages. This depends completely on the complexity of the app, but the one I work on is very complex, so it takes a good 4 - 5 seconds for the first draw to happen. This means the user sees a white screen of nothingness for about 5 seconds before the application actually bootstraps and shows a page. This is annoying, because it seems like the website is broken, and therefore people leave before it's even loaded. A super simple way to at least let users know the page is loading is to show a loading symbol. This very simple change may retain some of the users who would otherwise have left. So that's step one. Step 2 is to either lazy load parts of your application or make sure the scripts load as quickly as they possibly can, through a CDN or a static domain for example. These changes leave the user with a white screen (with a loader in it) for about 3 seconds before the application has loaded and is ready. It's a huge improvement, but it's not quite there yet.
The second disadvantage is the dynamic nature of a single page application. This means that none of the content on the pages is actually...well, on the pages. The pages don't even exist. Everything is loaded at runtime. This causes the long initial load, but also the swift interface after the scripts have loaded. It's also a very bad thing for SEO. Search engines and web crawlers are simply not built or prepared to deal with dynamic websites. They don't seem to understand that websites these days are very dynamic and often need to load a lot of JavaScript before they even work. If we take the Facebook and Twitter social cards as an example...well, you won't see a page title, or a description, or a featured picture, or even any meta tags. The Facebook Open Graph crawler simply doesn't understand what to do with your web app.
So the (easy, not so easy) solution is to use server-side rendering or prerendering. These are two very different things. For a framework like AngularJS, in which the controllers and directives are tightly coupled with the DOM (the HTML), server-side rendering is almost impossible. So that option is out. That leaves us with prerendering pages. What does this mean? It means the server serves a static version of the page when desired. This is most useful for Facebook's Open Graph crawler, because it finally understands the data it's receiving: there is a title, description, tags, and images, and it just works. A lesser, slightly strange alternative could be to make the loading screens of your application resemble the view it's about to serve. Right now there is one well-known prerender service available, prerender.io. I have been using their service for over a year and it works well enough. It's open-source and can be pulled from Github.
However, I wanted something else, more of a hybrid solution. Right now we use a sitemap generator that crawls all the pages and makes an enormous sitemap for Google. To me, this seems like two jobs that could be combined into one: if you're crawling every single page on the website anyway, why not prerender all those pages at the same time? Well, this is what I built. It's a solution that not only serves static pages when they're requested, but is also a website indexer that can index any page on the fly in case it's not prerendered yet. So did I build this in Node? No, I did not. I actually built the crawler in Python. Why? Well, I've built a crawler in it before. That one, like most crawlers, was only able to index static pages, so I enhanced it with PhantomJS to fully render dynamic pages and save them to a file. I then integrated this Python project into my Laravel project, synchronizing all of the cached pages to an S3 drive for swift requests. If you're interested in checking it out, you can clone it from Github. If you think you can do better (and I think most of you can, because I'm a huge Python noob), create pull requests to improve it with me. Anyway, this solution is able to crawl, index, and cache static files of the entire website, which I think is pretty cool!
If you've ever built an application on a different operating system (OS) than the OS of your web host, you will know the phenomenon of an application working flawlessly on your localhost and completely falling apart on your hosting server. I have definitely seen this happen to my applications a lot. I primarily work on a Windows machine, with a XAMPP installation for the server and database. This is how I test my applications and see if anything strange happens when I run them. When this is all perfect, I deploy to my remote server through Git. So far so good...until I pull the changes and see my application fall apart, because somehow an error or typo slipped in. One of the main things I've seen happen is that file extensions of images, for example, are capitalized. On Windows this is no problem at all; it will run perfectly. However, Ubuntu (my main server OS) will start to throw errors, because its filesystem is case-sensitive. It will not find the image with the capitalized extension, because that file doesn't exist; only a version with a lowercase extension does, and that's not the same file, so it simply throws an error.
It's things like that, serious ones but also little things, that can differ per OS. Throw another developer into the mix and you can very well end up with a project that has to be flawless on macOS, Windows, and a Linux distro at the same time. This used to be a tedious process, until things like virtual machines and Docker came along. Docker runs a virtual OS on your host OS, and it will be identical on all the different host operating systems. This makes all environments work identically on all machines. This is great, but it has its limitations in my opinion. Before you shoot down my crazy ideas, hear me out. I use virtual machines to create fully fledged Ubuntu environments on all the different host operating systems. But Roelof...that's just making things harder for yourself! Well yes, sort of. You will need to adjust all the different host operating systems to work flawlessly with the virtual machine environment, and that could be a tedious process, but it can also be easy once you have a single working machine. In my case, I wrote an entire installation script to install a particular project (this is of course interchangeable with other Git projects) in a folder, complete with Apache2, Redis, Solr, and MySQL. So installing the entire environment is as easy as running a single command and following a few simple instructions.
But why would you want a complete OS instead of just a lightweight Docker installation? Believe me, I tried to set up Docker and work with it like that, but I simply couldn't get it to work on my Windows machine, and going with a virtual machine was just so much easier. Also, the installation process can be run on many different host operating systems and even on remote hosts, so the environment on all those machines is also identical. You don't have to think about bottlenecks in any way, shape, or form, and it just works for me. Call me crazy, I won't blame you. Docker is probably far...far easier and I just overcomplicated it, but virtual machines do the exact same thing for me: identical environments with identical permissions on all the different machines, so everything always works identically.
NOTE: This is Roelof from the future (January 2019)! Wow! Docker is indeed so much easier to use than a virtual machine. If you're reading this, don't bother to work with a virtual machine and use Docker right away. Implementing it into your existing workflow is much...much easier!
Yes! I know! Another caching post! But caching is very, very important! With that out of the way, I'd like to explain why it's so important, not just for your hardware but also for your users. Before I explain my thoughts on caching, I should mention my understanding and interpretation of the term "caching". Caching, for me, means temporarily saving data in a very easy-to-read and easy-to-process format, so it can be retrieved effortlessly and used right away. What I'm really saying is that the data has been processed, formatted in the way your application will need it, and then saved to an entity. This entity can be several things, for example a flat database table, a file of some sort (.txt or .json for example), or memory in Memcached (Memcache for Windows) or Redis.
So with that said, let's get right to it. As I mentioned, caching is important for your hardware. Not necessarily for its lifespan, but for the resources that can be freed up for other tasks. If you have to query a database multiple times and it returns the same result every time, you've found a task for caching. Constantly retrieving the same (static) data and processing it in the same way wastes CPU/RAM resources and is costly. Instead, you can cache the data on the first request and serve it from the caching layer afterward. If you do this, you have just saved CPU/RAM resources that can be used for other tasks.
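That pattern, often called cache-aside, can be sketched like this (a Map stands in for Redis or Memcached, and all names are invented): do the expensive query and formatting once, serve the processed result from the cache afterwards, and invalidate explicitly when the underlying data changes.

```javascript
const cache = new Map();

function getReport(runQuery) {
  if (!cache.has('report')) {
    cache.set('report', runQuery()); // expensive: query + format, done once
  }
  return cache.get('report'); // every later request is a cheap lookup
}

function invalidateReport() {
  cache.delete('report'); // call this whenever the source data changes
}
```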
But it doesn't just save hardware resources, it's also quicker. Think about it: querying data from the database, processing it, and formatting it for usage, versus requesting the data from a caching layer and receiving it. This speed boost can significantly reduce the loading times of your application, making the user experience better. I remember the huge difference in retrieval time between non-cached and cached data. Non-cached could easily take 5 or 6 seconds for a single task, while the cached data was retrieved within a second. For most simple tasks that's still very slow, but it at least shows a significant decrease in loading times. This particular caching job made a homepage of my app load a full 3 to 4 seconds faster. And this was before I switched from file caching to Redis caching, which cut cached request times by at least another 50%.
I mentioned the user experience briefly before. There is nothing more annoying than long loading times, and they will definitely make users leave your website. Google said at their Chrome Dev Conference last year that if your app doesn't have a first draw (showing some kind of screen) within 3-5 seconds, 50% of visitors will leave your website. Now, I'm not a user experience expert at all, so I can't confirm or deny this statement, but it makes sense. Oftentimes I'll do the same thing. With that said, if you can make your app load quicker in any way, do it. If you have a lot of static data that needs to be loaded from a database upon first entry into your application, make sure to cache all of it. Make the first draw as quick as you can. When caching data to files or the database doesn't work well enough, try Memcached. When that's still not quick enough, go all out with Redis.
I could just praise caching and leave it at that, but that wouldn't paint the whole picture. Of course, there are also disadvantages. For example, it's very tough to cache data that changes a lot. It's definitely possible, but you end up having to synchronize the cached data with the new data on every single (important) change. This makes it hell for developers. My rule of thumb: when the data can change at least once a day, or needs to be available right away when changed, do not cache it. If, however, the data never really changes, or you really need a performance boost somewhere, go ahead and cache it. Make it easier for yourself, not harder. The number of times I was left wondering why a page wasn't updating, only to find it was the cache, is too high. Learn from my mistakes and don't cache anything while you're still working on that particular part of your application.
You! Yes, you there! Are you still using SQL queries to perform search requests in your database? How's that going for you? Not as quick as you'd expect, right? This was the main reason I decided to switch from SQL queries in a relational database to a piece of software designed for search: Solr. Don't get me wrong, SQL queries, or requests to any NoSQL database, are perfectly fine if you have very specific search needs. For example: find the records belonging to this particular unique identifier. That's a wonderful solution. However, when your databases start to grow, the number of documents belonging to that particular identifier grows, and you have to do more JOIN operations in a relational database, you'll start to find bottlenecks.
JOIN operations, in particular, were an issue for me; the sheer amount of data that needed to be filtered destroyed my search performance. I often had to wait 10 seconds or more before receiving any data whatsoever. My first solution was to make one enormous flat table in my SQL database. My thinking was that eliminating the JOIN queries would boost performance. This worked really well for months. It started with about 5,000 records, which is an easy task, to say the least. However, this slowly grew over the months to a table of 200,000+ records. At this point I saw a slight performance hit, going from 2 to 4-6 seconds per request. This was definitely still less than before, but it was too slow for me. I eventually decided to make the switch when I had to implement real-time pricing for products. This meant calculating discounts, user credits, and a list of other things on the fly...for thousands of records. You can imagine the enormous hit this must've been. My search request times went from 4-6 seconds to about 45 seconds. This was the point at which I stopped, stood back, and made the decision to use two different systems, each designed for the purpose it serves: the relational database to save data, keeping it well structured, and Solr to index documents and make them searchable.
Now, if you know me, you know that I'm not the most technical programmer alive. I know how to do a bit of everything and am not the best at any of it. However, I am someone who does not give up easily. Learning how to set up Solr and Solarium (a PHP library) was definitely not an easy task. In my opinion, I missed a lot of the documentation quality that I'm used to. I use Laravel and Laravel Lumen on a daily basis, and these PHP (micro) frameworks are wonderfully documented. To start the whole process, I set up a virtual Ubuntu box. I was already familiar with the Java programming language (on which Solr is built), so at least I wasn't completely clueless. Anyway, I set up the Solr server and created my first collection. This took me about 4 hours, because I couldn't find the command for it and kept trying to use the GUI in the browser. After I found the command, though, I was off to a flying start. I set up a username and password and then got started on Solarium.
Solarium is a PHP library for interacting with a Solr server. It was easily installed through Composer. The configuration in Laravel itself was also very simple, and I had a working connection with my Solr server within 30 minutes. But then I had to populate this brand new Solr server with data to index. I followed the Solarium documentation and struggled. It's a useful guide, but it could be much more extensive to really help people who are just starting out with the library. However, once I finally got the first documents indexed, it was very easy to create new collections and populate them with documents.
So you might be wondering: that's great and all, but did it actually help you with your project, and was it worth it? To answer the first question: yes, it did help my search performance. I went from 45 seconds to 600ms - 1.8 seconds. Pretty amazing performance boost, right? And was it worth it? Absolutely! Besides being incredibly fast with normal search requests, you can very easily create facets, apply filters, group documents, etc. This meant that I could replace most of my manual filtering in PHP with the built-in filtering in Solr, further improving the search experience. Solr also automatically sorts documents by relevance, so the most relevant documents are displayed at the top. Before, I had to do all of that manually, because relevance in my case was heavily dependent on the distance between the requested location and the product. Solr does all of this for you, on the fly. Of course, this requires a lot of configuration in the form of search queries, but the possibilities are virtually limitless.
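To give a feel for those filters and facets, here is a small sketch that builds a Solr select request over its standard HTTP API (the q, fq, facet, and rows parameters are real Solr query parameters; the core name 'products' and the fields are made up for the example). My own setup went through Solarium in PHP, but the underlying request looks like this:

```javascript
// Sketch: building a Solr select request with filters and a facet.
// The core name and fields are illustrative; adjust to your own schema.
function buildSolrQuery(baseUrl, core, { q, filters = [], facetField, rows = 10 }) {
  const params = new URLSearchParams();
  params.set('q', q);
  // Filter queries narrow the result set without affecting relevance scoring.
  for (const fq of filters) params.append('fq', fq);
  if (facetField) {
    params.set('facet', 'true');
    params.set('facet.field', facetField);
  }
  params.set('rows', String(rows));
  params.set('wt', 'json'); // ask Solr for a JSON response
  return `${baseUrl}/solr/${core}/select?${params.toString()}`;
}

const url = buildSolrQuery('http://localhost:8983', 'products', {
  q: 'name:drill',
  filters: ['category:tools', 'price:[10 TO 100]'],
  facetField: 'brand',
});
console.log(url);
// The URL could then be fetched, with the matching documents in
// response.docs and the facet counts in facet_counts.
```

Everything that I used to do manually in PHP (filtering by category, bucketing by brand) collapses into a couple of request parameters here, which is exactly why the switch paid off.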
I'm very happy I made the switch. Not only did it speed up the search, but it also helped me analyze data, create reports, and speed up different parts of the application. Besides the obvious boost in speed, it also relieved my server load. The enormous SQL queries were putting a strain on my server, partially due to my own incompetence at times, but also due to the larger dataset. Solr took that strain away, so the server can now focus on more important things, like helping the user have a good experience within the application. So if you face the same problem, definitely give Solr a try and see if it benefits you the same way it did me!
A few months ago I started saving data in the browser. It wasn't for performance reasons, but for functional ones. I used localStorage for saving data that needs to be available to the web app and the user at any point, even after simple refreshes. This worked perfectly for a long time, until the app grew larger and larger. At that point, I had 5 to 10 XHR requests per view. This was easily manageable in the beginning, when it was 2 or 3. Most pages used the same data, the same non-changing data. This is when I started thinking about caching all of this data, making the experience better for the user, because the app would load faster. Not only do the users benefit, though; the server also gets fewer requests, allowing it to perform better for concurrent users.
So why was localStorage not good enough anymore? Well, there are two simple reasons. First of all, the limited storage space: localStorage data can only be saved as strings, and those strings can only get so long before errors start to occur. IndexedDB, on the other hand, saves data as actual objects. This way, data can instantly be used in the application. Besides saving data as objects instead of strings, IndexedDB is asynchronous. This is important because it doesn't block the DOM. Not blocking the DOM matters when larger tasks are being processed and you don't want to confuse the user with a non-responsive application. localStorage and sessionStorage are both synchronous and do block the DOM, but then again, they're not meant for larger tasks. IndexedDB is the better fit there.
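The "strings only" limitation is easy to underestimate. Because localStorage forces every object through JSON.stringify and JSON.parse, types get silently lost along the way, while IndexedDB's structured storage keeps real objects intact. A small demonstration of what the string round trip loses (the record shape is made up for the example):

```javascript
// localStorage can only hold strings, so objects must round-trip through JSON.
// That round trip silently loses types; IndexedDB keeps the actual objects.
const record = { id: 7, updatedAt: new Date('2017-01-01T00:00:00Z') };

// What localStorage forces you to do:
const stored = JSON.stringify(record); // a plain string
const restored = JSON.parse(stored);   // the Date came back as a string!

console.log(record.updatedAt instanceof Date);   // → true
console.log(restored.updatedAt instanceof Date); // → false
console.log(typeof restored.updatedAt);          // → 'string'
```

With IndexedDB you would put the record object in an object store directly and get the same shape back, no manual serialization step required.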
But why use IndexedDB at all? Isn't it just another layer that you need to pay attention to when you're developing an application? Absolutely, but also look at what it can do for you as a developer, for your server, and for your users. If done correctly, you can harness IndexedDB to cache all your incoming "static" data. This means you only have to load a specific resource once. Once you've loaded it from the server, you can save it and use the saved copy the next time it's needed. This accomplishes two things. One, your server doesn't have to handle duplicate requests from an individual user. Two, the requested page will load quicker, since the request to the server is no longer necessary and the resource is already available locally. This will be beneficial to the user experience of your application.
Whether you use localStorage, IndexedDB, or nothing at all, making an application as efficient as possible is very important. It's important for your users, but also for your server. Nothing is worse than overloading the server or causing a bad user experience. Whatever you do, make sure you do it well. If that means you need a caching solution like localStorage, sessionStorage, or IndexedDB (WebSQL is deprecated, don't use it), go for what best fits your needs. Do you need something simple, like keeping data around between views? Give localStorage or sessionStorage a try. They're excellent for small tasks. If you need a more complex caching solution, one that's capable of saving larger sets of data and doesn't block the DOM, IndexedDB is exactly what you should be using. To make this even better, use it in combination with service workers and you're on your way to a web application that's not only available when you're online, but also when you have no internet connection whatsoever.
Virtual Reality is getting some exposure lately. The HTC Vive, the Oculus Rift, and even the Gear VR are all trying to establish themselves. The Gear VR is aimed at usage on Samsung phones, and the HTC Vive is best used with Steam games. But could VR play a bigger role in uses other than gaming? Yes, it definitely could, if done properly. Just have a look at the way Jaguar launched their new I-Pace electric SUV. They used VR to display, and let people experience, diagrams, sketches, and other models. That is exactly what VR is all about: experiencing an event, being in a place, feeling one with it. This all sounds very visionary, but what are the actual practical applications for VR in the world?
Stores, actual physical stores, are still a thing. Why? Well, because you can see a product for yourself, feel the product, experience it. For a lot of people this is very important. It helps them believe they're buying a product that's worth their money, worth their time. But what if the closest store that has a product you want or need is far away? What if you don't feel like leaving the house, but still want to be able to experience the product you're looking for? Well, VR could fill this void. You would be able to "walk" through an online store, trying out different products before deciding which to buy. This could simply start out with being able to see the product from all angles, being able to rotate it and see what it's like.
What would this progression in online stores take? Well, first of all, VR needs to be brought to the browser. This would take a lot of development from software engineers and funding from donors, investors, or believers. It would also need improved hardware for everyone wanting to use the technology. It'll need artists and game designers to make models of the products, developers to build controls for interacting with those models, and hardware engineers to develop affordable VR gear to make its use widespread. Is this a hard task, a tough progression? Absolutely! Is it worth all the monetary investments, all the time spent on it? Only time can tell. Would it bring VR to the masses with applied, real-world uses? Definitely!
So what's stopping us? Well, cost, first of all. VR gear is expensive, and computers able to run VR games and programs are expensive. The hardware costs a fortune and the software is not where it should be just yet. But it's getting there. The software has improved a lot in the past two years, from simple roller coasters on the Oculus Rift, to pretty impressive games like Job Simulator on the HTC Vive, and golf games that genuinely work and are quite fun. But that's the thing: right now, the hardware and software aren't taken seriously by everyone yet, the same way smartphones were seen as unnecessary a few years ago. It takes a lot of development and commitment from both developers and the community. Together, the platform will improve, become better known, and be established as a serious platform for all kinds of uses. This is also why the car launch by Jaguar is such an important step for VR.
If all of the previous conditions have been met, where would that leave VR? VR could potentially take over the role of the personal computer. It sounds like a very distant future, but at the rate hardware is currently being developed, and with that pace only increasing, this could be just a few years away. It will not happen overnight, and a lot of doubt and ridicule will come with it, but this is no different from the development of mobile phones and personal computers. It's a cycle that is repeated over and over again for every kind of technology, and VR is no exception.
Once it gets past this stage, however, when both hardware and software are developing quickly, efficiently, and are of high quality, the spreading will start. When the cost of using VR gear drops, it'll spread like a wildfire in a dried-out countryside. VR will only become more prominent as more and more people start to experience real-life applications for it. Once businesses start using it to promote their services, helping customers and potential customers with their needs, it will gain real-life applications beyond gaming and entertainment. It'll slowly become integrated into everyone's life. Keep in mind, this may be in its current form, with glasses, headphones, and controllers, but it may very well be more like holographic images of some sort, either through Google Glass-like products or some other, newer invention. VR is coming, be ready for it.
Building your own computer? Why would you do that if you can simply buy one in the store? You'd be instantly ready to use it, and you know it'll work. But you know what? That would be the easy way out, wouldn't it? Isn't understanding how a computer is made, how it works, and what the different components are for way more interesting? Isn't it way more useful to have a computer that is made for the exact purpose you need it for, nothing more, nothing less? Wouldn't you want to be able to control every aspect of the computer itself, even the look, cost, and extras? Well, that's exactly why you should build your own computer.
Understanding how a computer works and is made can be both intriguing and extremely useful. Finding out how different parts work together, which parts need to be compatible, and the influence of different combinations of parts can be an interesting research project or experiment. But besides being interesting, it can also be very useful in case a component breaks. You'll be able to figure out which part is causing the problem, or which part is broken and needs to be replaced, and how to fix it. Knowing exactly what parts you put in your computer helps with diagnosing problems and looking up ways to fix them.
Knowing which parts you can put together to achieve a certain goal or get a certain result is not only a fun experiment; it can have a big impact on how your new computer behaves during certain tasks. If, for example, you want to do web development, getting a fast and expensive graphics card is simply not necessary. Web development is very hard drive, RAM, and CPU intensive. Lots of data will need to be saved to the hard drive and retrieved again, and that data will also need to be processed when saving or retrieving it. This means that a computer for this specific task requires a fast processor, a fair amount of RAM (4 to 8GB at least), and a fast drive, such as an SSD (Solid State Drive) or M.2 drive. But if, for example, you want a computer to play video games on, you're going to need a fast graphics card at the very least. Every frame will need to be rendered to the screen without any frame lag. This means a fast graphics card, but also a good processor and RAM to make calculations in the background and to make sure tasks get executed correctly. In the case of a gaming computer, the hard drive is less important. You'll still need one with a lot of space to install all the games you want. You can install one or two on an SSD for optimal performance, but you really won't notice an enormous amount of extra smoothness.
One application that really needs a combination of all the best components is a video editing and rendering computer. You'll need a fast graphics card for rendering all the frames of your videos, a lot of RAM to process all the information you'll be saving to your fast hard drives, and a fast processor to manage all the different tasks that come into play. This will probably be the most expensive option of the three described above.
Of course, there are more applications you may want to build a computer for. Maybe you just want a very simple computer for text editing. In that case you can go easy on all parts and go for the bare minimum your OS (Operating System) needs in order to function well. That's where building your own computer has another advantage: you can make it as cheap or as expensive as you want. You don't need to fit a budget around a choice; you fit your choice around your budget. Say, for example, you want to spend 500 euros/dollars on a computer, but you want to be able to play video games on it without any problems. Well, you start by selecting a graphics card that will run every single game you play or plan on playing. After you have figured out which graphics card fits your needs, you can select a processor, the amount of RAM you think you'll require (please go for at least 8GB these days), and then a motherboard that connects these pieces together perfectly. You can go for a cheap hard drive, but please don't cheap out on the power supply. A great quality power supply is your best friend and will keep your computer happy; aim for an 80 Plus Bronze certification or higher. You can even select whichever case you want. This can go either way: a really cheap case, or a very fancy but expensive one. Just make sure the motherboard you picked out, and all the other components, will fit in your chosen case. Usually this is indicated by motherboard size (ATX, micro-ATX, mini-ITX, etc.).
As you can see, the possibilities are endless. Even if you decide to change your mind on the purpose of your computer, upgrading is easy. Just add a quicker processor, a faster graphics card, an SSD or M.2 drive, or whatever else you may need to get your desired machine. And because you built your computer yourself in the first place, you'll know exactly which parts will be compatible, or at least you'll be able to find out with a bit of Googling. So next time you're thinking of buying a new PC, but you don't want to take the easy way out, or you have a very specific need or budget, think about building your own computer. It can be a lot of fun, a great learning experience, or just an interesting project.
JavaScript: a language built to work on the client, in a browser, to make a website more interactive. Use JavaScript to react to user input, send XHR requests to PHP (or Rails/Java/etc.), receive data from the server, and complete a task with the provided data. This is the way JavaScript was used for a long, long time. But then, in 2009, NodeJS was launched. NodeJS, which most web developers have heard of, is a JavaScript runtime for the server. This means that JavaScript is not just for the client side anymore; it can also power a full-fledged server. This has many benefits, including the following: it's blazing fast, the front-end and backend use the same language, and code can easily be shared between the front-end and backend. But what does this really mean?
Well, to answer that question, let's use a front-end JavaScript library as an example to be used next to Node on the server. Let's call this library ReactJS. ReactJS is a library created by Facebook to easily build user interfaces through the use of components. This means that you can easily make reusable components, like a navigation bar, provide them with information from the server, like menu items, and render them on the screen. This is all well and good, but how does it answer the question? Well, ReactJS comes with ways to convert the components within a view to strings. This means that NodeJS can serve this string as a response to requests to its server. This can be useful for three different things.
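To see why "components to strings" is such a natural fit for the server, here is a toy illustration of the idea. This is deliberately not React's real API (React uses ReactDOMServer.renderToString for this); it just shows that when a component is essentially a function from props to markup, the server can call it and send the resulting string as the response. The NavBar and Page names are made up:

```javascript
// Toy illustration of the idea behind server-side rendering:
// a component is a function from props to markup, so the server can
// simply call it and ship the resulting string. Not React's real API.
const NavBar = ({ items }) =>
  `<nav>${items.map((item) => `<a href="/${item}">${item}</a>`).join('')}</nav>`;

const Page = ({ title, items }) =>
  `<html><head><title>${title}</title></head>` +
  `<body>${NavBar({ items })}</body></html>`;

// On the server, this string is what Node would send back in the response:
const html = Page({ title: 'Home', items: ['about', 'contact'] });
console.log(html.includes('<a href="/about">about</a>')); // → true
```

React's real version does a lot more (escaping, attributes for re-hydrating on the client), but the shape of the trick is the same: view in, string out, response sent.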
With frameworks like AngularJS, the JavaScript won't be executed when a crawler hits your website. This leads to misinterpreted meta tags, titles, content, and images. There is a solution for this, but it's complicated and just plain annoying: you have to use PhantomJS to render the pages once a crawler hits your site and serve a static HTML version of the requested page. This is slow when a page is hit for the first time, because the page needs to be rendered on the fly. Once this is done, it is cached and the problem is not as apparent, but it's still a bottleneck for web applications built with AngularJS. Here's where ReactJS shines. Because the content of views can very easily be converted to strings, NodeJS can serve these static pages when the specified URL is requested. This doesn't just happen when a crawler hits the page; it happens all the time. This means that Google, Facebook, or any other service that uses a crawler to grab page information will always be served a static HTML page with all the required information.
Besides making it easy for crawlers to read the page content, NodeJS also helps with page refreshes. Imagine the following scenario. You made a React application with React routing. You hit the index page and everything is perfect. You can navigate between views and everything works perfectly fine. BUT THEN the user decides, for some reason, to refresh the page on the about page of your React application. A 404 page is presented. But I made a route for the about page, why is it giving me a 404 page? Well, for the simple reason that the entrance of your React application is under /. This means that unless you are on the home page when you refresh, you will get a 404 page, because the root of your application can't be found. In AngularJS this can be solved by always pointing all page requests to the index.html page of your application and prepending the rest of the requested URL to the request in the Angular router. In React, in combination with Node, this is much, much simpler. What you can do through Node is render the requested React view to a string and simply serve this string as a response, just like in the SEO case above. Because this time it isn't the crawler requesting the page but the user, the browser will automatically render the HTML and the user will be presented with the right page. Once this HTML is rendered by the browser, React will automatically be kick-started and ready for new requests and user actions.
Last but not least, loading speeds of pages can be drastically improved. Because NodeJS creates an HTML string on every page refresh, that string can very easily be cached. This way, Node can just look in the server memory and see if a cached version of the page exists. When it does, it can return this cached version instead of rendering the React view on the fly. Of course, you should always set a maximum cache lifetime for pages, because otherwise your fancy updated pages might never actually be presented to the user and all your work will be for nothing. A good guideline for pages that change often could be a few hours to a day. Other pages can be cached for a week or two. A good average is to cache pages for one day at a time, to make sure users get the updated experience soon enough while still benefiting from the faster page loads.
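The "cache for one day at a time" idea can be sketched in a few lines. Here, renderView stands in for rendering a React view to a string, and all names are illustrative rather than taken from any framework:

```javascript
// Sketch of caching rendered pages with a maximum age. 'renderView' stands
// in for rendering a React view to a string; names are illustrative.
const ONE_DAY_MS = 24 * 60 * 60 * 1000;

function makePageCache(renderView, maxAgeMs = ONE_DAY_MS) {
  const pages = new Map();

  return function getPage(url, now = Date.now()) {
    const hit = pages.get(url);
    if (hit && now - hit.renderedAt < maxAgeMs) {
      return hit.html; // fresh enough: serve the cached string
    }
    const html = renderView(url);              // stale or missing: render on the fly...
    pages.set(url, { html, renderedAt: now }); // ...and cache it for next time
    return html;
  };
}

let renders = 0;
const getPage = makePageCache((url) => { renders += 1; return `<h1>${url}</h1>`; });

getPage('/about');    // first hit renders the view
getPage('/about');    // second hit is served from the cache
console.log(renders); // → 1
getPage('/about', Date.now() + 2 * ONE_DAY_MS); // cache expired: re-render
console.log(renders); // → 2
```

Tuning maxAgeMs per page type gives you exactly the split described above: a few hours for fast-changing pages, a week or two for the rest.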
So what does it mean to share code between the server and the front-end? Well, it means that user experiences are smooth, response times are low, and implementing new features can be almost instantaneous. There is no need to write the same logic twice (which I catch myself doing a lot in Angular), because the code for the front-end and backend is exactly the same. Because the code is exactly the same, SEO can be handled easily through server-side rendering, pages are always available, even after refreshes, and page reloads can be made incredibly quick through caching. Using the same language all across the application is quick and convenient, and it makes developing a delight, because you only need to know one language for everything.
The title of this post is a thing I'm struggling with on a daily basis. On the one hand you have Windows, an operating system series that I've been using my entire life, from Windows 3.1 to the new Windows 10. You could say that I like my Windows OSes. On the other hand, I'm also a web app developer, and Linux-based systems are perfect for running servers; installing new programs on them is simply a joy. How do I choose a system to stick with, or is a compromise between the two possible?
One of the main advantages of Windows is its compatibility with games. Gaming is becoming bigger and bigger on Linux-based systems, but it's nowhere near the level of Windows. Almost all major games are available for Windows, which is why I prefer Windows over Linux for gaming. Besides the compatibility with most major games, most programs are also compatible with the Windows operating system. This makes installing programs very simple for the average user.
Of course, there are also disadvantages to using Windows, especially as a web developer. One of them is that Windows has its roots in DOS, not UNIX like macOS and Linux-based operating systems. In most cases this is not a big problem, but in my opinion, a UNIX environment is much more intuitive than a DOS-style one. Another disadvantage of Windows is that it's not free software. Of course I gladly pay for software that makes my life easier, but everyone would rather have things for free.
Because so many programs are compatible with Windows, and developing for a Windows machine is easy, it's also very easy to catch viruses. On a Windows machine it's very easy to install software from non-trusted vendors, which makes it easy to install a virus on your system if you're not an experienced user. For most people this means installing a virus scanner and firewall is the only way to prevent this from happening. That can bring extra costs and unwanted software subscriptions with it.
Where installing a program on a Windows machine is very simple for the average user, on UNIX it's very easy for experienced users. You can simply install a program by using the command: apt-get install [package-name]. Again, for the average user this may be very complicated and just not a wanted way to install programs, but since I'm working with the command line for Node and PHP anyway, the transition feels very natural.
Apart from the command line and UNIX, Linux just has so many different distributions (distros)! There is always one that fits your needs or just works the way you want it to work. And the best part of these distros? They're all free to use; you don't have to pay anything for them, apart from a boot USB if you don't have one lying around. Because different distros can behave differently with different pieces of software, each distro has its own (official) repository of programs you can install. If the program you need is not available in this repository, you can just add one that does have it. Because the official repositories only carry trusted pieces of software, the chance of getting a virus from installing programs is practically nonexistent, unless a piece of software slipped through the cracks or you install programs from a non-official repository. Only then could you get viruses on your system.
In some distros you can expect useful pieces of software to be pre-installed, like Node or Python. With these programs pre-installed, you can instantly start programming or set up a server for a project you're working on.
However, there are always disadvantages. One of them I mentioned earlier: gaming. Even though support for Linux-based systems is getting better, it's still not at the level that gamers can expect from Windows. But then again, I haven't come across many people who use their Linux-based operating system for gaming, so this disadvantage doesn't apply to everyone out there.
As mentioned before, installing programs on a Linux-based system can be done through apt-get install [package-name]. This is not for everyone, which makes the learning curve on some distros quite steep for average users. Distros like Ubuntu come with an app store, but this is not the case for all of them, so it's something to look out for when choosing one for your uses. A last disadvantage I have encountered is the recognition of devices like an iPhone. For me, it showed up, but that was about it. It takes tinkering to get it to work, and this can be a breaking point for users who just want things to work. If you're a person like that, perhaps a Linux-based system is not the right system for you.
So is a compromise possible between Windows and Linux, one that makes use of all the advantages of both systems while covering for the disadvantages? Well, yes, there is. Two scenarios come to mind. The first is a dual-boot system, meaning you have both Windows and a Linux-based system installed on your hard drive. When starting your PC, you can choose which operating system to use. This can be great when you want the real experience, with no slow loading times: a real Windows or real Linux environment. If, however, you don't want to deal with that, a virtual machine is a great option. It allows you to boot up a Linux-based system inside a window on your Windows machine. This way you don't have to bother with installing a separate operating system, and you can easily switch between your Windows system and your Linux-based system. This can be ideal for testing purposes, application development, or just quick tasks.
So, based on your needs, you can go for a Windows-based system, a Linux-based system, or a combination of the two. Personally, I have two different machines, one running Ubuntu (a Linux distro) and another running Windows 10. This makes developing applications very easy, because I need some pieces of software that are only available on a Linux-based system. I can simply set up a connection between the two systems and they work with each other perfectly. But this is only one example; there are a ton of different scenarios in which a combination of the two operating systems is a very desirable setup. Try it and see for yourself. Give different distros a try, try making connections, combine systems, and see what you can do with them.
Machine learning has been getting a lot of attention in the last few years. Without most of us knowing it, it's been taking over our lives. Webshops, social media, and our phones all make use of it in some way. It sounds scary, but do the advantages outweigh the disadvantages?
I personally think that the advantages do outweigh the disadvantages. This is because machine learning, and with it big data, helps systems learn what you're like and how best to help you. A system will better know how to assist you, adjust to you, and predict what you may be interested in. It takes a lot of faith in the system to allow it to collect data based on your behaviour within it. But, if done correctly, this is a very valuable "personal assistant". An example of machine learning in action comes from a presentation by Werner Vogels (CTO of Amazon) at The Next Web Conference in Amsterdam. He mentioned that even small things, like emails with suggestions based on your search history, already collect a large amount of data. At Amazon they analyse which emails, with which products, get opened and clicked or deleted. This way a system learns which products you most likely care about more than others.
Advertising is definitely a big part of machine learning's use, but think about it: is it really advertising if you're genuinely interested in a particular product or range of products, and a system helps you find the best possible solution for what you need? It definitely is, but it's more than a shot in the dark, hoping someone bites and responds. It's a win-win situation for both buyer and seller. The seller has a better chance of making a sale, and the buyer finds the best possible product for his or her needs.
This is why I think machine learning will become much bigger than it already is. It won't just be used for advertising, but also for services like Netflix: suggesting which movie or series to watch at a certain time of the year, or at a certain time of day. It may even be able to suggest the right movies for a mood. The system will learn to help you pick the series or movies that perfectly fit you and your situation at all times.
With all this data comes a lot of risk as well. Keeping the data secure is very important for the integrity of the system it's used in. If it leaks, an outsider could learn anything and everything about a person without ever having met them. That is a scary thought. Not only could this lead to dangerous situations such as stalking, it could also cause private and professional harm if it turns out a person has a private interest in unconventional movies, products, or services. That could mean reputational damage for people, groups, businesses, and communities.
Machine learning and big data are incredibly useful when they are used in the right way. They can help make the lives of all kinds of people easier, but could also be a threat. Systems will be able to give very personalised advice, suggestions, and help in general. Security will need to be kept up-to-date at all times, because leaked data can cause harm on many different levels.
Real-time information, partial page loads, quick navigation between pages: JavaScript has a lot to offer and is getting more and more popular. Websites are no longer built with just plain JavaScript and jQuery. More and more JavaScript frameworks and libraries are being developed, and they are quickly taking over the roles of traditional web development techniques. The LAMP (Linux, Apache, MySQL, and PHP) stack is slowly losing ground to faster, more flexible ways of development, like the MEAN (MongoDB, ExpressJS, AngularJS, and NodeJS) stack. JavaScript allows for quicker navigation through websites and applications, and even lets developers build applications for phones.
Speed and flexibility are nice and all, but how does this apply to a real-world solution? First of all, a single page application (SPA) invites the user into a more interactive experience. Because an SPA loads its application shell up front and then fetches data asynchronously, loading times are shorter while navigating between pages. This behaviour is very similar to loading a native mobile application: the application feels smoother to the user, unlike a typical website, where you have to wait until the next page has loaded. A typical website doesn't feel dynamic; it feels like a stack of static pages you click through. A native application feels more like a stack of layers within layers that change and respond to user input, something a typical website will never manage smoothly. SPAs, however, try to replicate this dynamic feeling of a native application in a web environment. Through asynchronous calls and responsive JavaScript, pages load more quickly and respond better to user input, improving the user experience throughout the entire application.
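A minimal sketch of that asynchronous navigation might look like this. It assumes a browser environment and a hypothetical `/api/views/...` endpoint that returns JSON for each view; the render step is kept deliberately simple.

```javascript
// Pure render step: turn fetched view data into an HTML string.
function renderView(data) {
  return `<h1>${data.title}</h1><p>${data.body}</p>`;
}

// Hypothetical SPA navigation: fetch only the data for the next view
// and swap it into the current page instead of doing a full reload.
async function navigate(view) {
  const response = await fetch(`/api/views/${view}`); // assumed endpoint
  const data = await response.json();

  // Only the content area changes; the surrounding layout stays loaded.
  document.querySelector("#content").innerHTML = renderView(data);

  // Update the address bar without triggering a page load.
  history.pushState({ view }, "", `/${view}`);
}
```

The `history.pushState` call is what keeps the back button working even though no real page navigation happened.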
Second, a single page application generally takes up less bandwidth and less computing power on the server, for a simple reason: the server doesn't need to render and serve an entire web page for every request. Instead it serves the page once and then answers asynchronous requests with partial content or raw data, putting less strain on I/O and the CPU. A traditional page-per-request model ties up server resources rendering full pages, one request after another. JavaScript allows for asynchronous calls, and a JavaScript server such as NodeJS handles requests on an event loop, queueing work and interleaving many requests instead of blocking on each one. This lets it juggle multiple tasks at the same time, so for the same task a single page application will often outperform a typical web application.
And finally, the third benefit highlighted in this post: convertibility. More and more companies are bringing out applications for iPhone, Android, etc. these days. Often these applications are built from scratch by dedicated iOS and Android developers, a very costly process, often costing 50.000+ for a single application. What if there was a way to convert your existing website into a mobile application without a lot of extra development? With single page applications built in JavaScript, this is possible. There are countless tools that can help you convert a website into a hybrid application, PhoneGap for example. It essentially builds a shell around your website, allowing it to run like a mobile application on your phone. A single page application built in JavaScript can easily be converted this way, as long as the underlying API endpoints are accessible to the application. Of course this will never be as smooth as an actual native mobile application, but it offers a quick and easy way to test out a mobile application.
These points are just a few of the many benefits of single page applications. But keep in mind that benefits always come with trade-offs. There are also disadvantages to building single page applications, and these will be highlighted in a future blog post. The points in this article are the main ones I have encountered while building several single page applications over the past year. I'm sure more advantages and disadvantages will show up, but only time will tell.
Ever since I got a Raspberry Pi 2 in December 2015, I've been very interested in setting up a home server so I can store all my files and access them from anywhere. Besides file storage, I've been looking at ways to integrate it with my web development projects. The Pi 2 is great for this, especially being able to SSH into it remotely and use Git to keep the files on it up to date.
An ideal home server would do both of these things for me: file storage and local web hosting. Additionally, I would be able to use it for video streaming. Originally I used my Raspberry Pi for all of this, and while that worked well for the web hosting, it didn't for file storage. It was a hassle to get my external hard drives hooked up to it, manage all the folders, and keep everything organized.
A solution presented itself in the form of FreeBSD, in particular FreeNAS. With FreeNAS I can simply install the operating system (OS) on a flash drive and boot the entire system from that, while using multiple hard drives for file storage. Looking at guides and videos on YouTube, I figured that four hard drives would be ideal for this setup. I will also need a sufficient amount of RAM and CPU power to use a ZFS file system with FreeNAS. That way the data on the hard drives stays safe even if one or two drives stop working, depending on the RAIDZ level used.
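As a back-of-the-envelope sketch of that trade-off (the drive sizes are hypothetical, and real ZFS overhead such as metadata and padding is ignored), the arithmetic behind the RAIDZ levels is simple:

```javascript
// Simplified RAIDZ maths: parity drives buy fault tolerance
// at the cost of usable capacity. Ignores real-world ZFS overhead.
function raidz(driveCount, driveSizeTB, parityDrives) {
  return {
    usableTB: (driveCount - parityDrives) * driveSizeTB,
    survivableFailures: parityDrives,
  };
}

// Four 2 TB drives (hypothetical sizes):
console.log(raidz(4, 2, 1)); // RAIDZ1: 6 TB usable, survives 1 failure
console.log(raidz(4, 2, 2)); // RAIDZ2: 4 TB usable, survives 2 failures
```

With four drives, surviving two simultaneous failures means giving up half the raw capacity, which is a trade-off worth weighing before buying the hardware.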
On the downside, this system won't be as energy efficient as a Raspberry Pi, but which system is? I will have to research how to make this new FreeNAS build quick and reliable while keeping power usage low. More posts will follow on this, and hopefully by then I'll have more concrete ideas about system specifications, the specific components I'd like to use, and the estimated cost of the whole project.