Jesper Jarlskov's blog

Laravel file logging based on severity

By default, anything logged in a Laravel application is written to the file storage/logs/laravel.log. This is fine for getting started, but as your application grows you might want something that's a bit easier to work with.

Laravel comes with a couple of different loggers:

  • Everything is logged to a single file.
  • Everything is logged to the same file, but the file is rotated daily so you have a fresh file every day.
  • Log to syslog, letting the OS handle the logging.
  • Pass log entries to the web server's error log.

Having different handlers included provides some flexibility, but sometimes you need something more specific to your situation. Luckily, Laravel has outsourced its logging needs to Monolog, which provides all the flexibility you could want.

In our case, logging to files was enough, but having all of our log entries in one file made it impossible to find anything. Monolog implements the PSR-3 specification, which defines eight severity levels, so we decided to split our logging into one dedicated file per severity level.

To add your own Monolog configuration, you just call Illuminate\Foundation\Application::configureMonologUsing() from your bootstrap/app.php.

To add a file destination for each severity level, we simply loop through all available severity levels and assign a StreamHandler to each level.
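In Monolog terms, that loop could look something like this (a sketch assuming a Laravel 5.x bootstrap/app.php; the log file naming is illustrative):

```php
$app->configureMonologUsing(function (Monolog\Logger $monolog) {
    foreach (Monolog\Logger::getLevels() as $name => $level) {
        $monolog->pushHandler(new Monolog\Handler\StreamHandler(
            storage_path('logs/' . strtolower($name) . '.log'),
            $level,
            false // don't bubble, so each entry ends up in exactly one file
        ));
    }
});
```

Setting the third StreamHandler argument to false stops entries from bubbling up to the other handlers, which is what keeps each severity in its own file.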

Simple as that.

Later we decided that all errors should also be sent to a Slack channel, so we could keep an eye on them and react quickly if required. Luckily, Monolog ships with a SlackHandler, so this is easy as well.
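A sketch of what that could look like, inside the same Monolog configuration callback (the token and channel are placeholders; the arguments follow Monolog 1.x's SlackHandler):

```php
if (!config('app.debug')) {
    $monolog->pushHandler(new Monolog\Handler\SlackHandler(
        'your-slack-token',   // hypothetical API token
        '#errors',            // hypothetical channel name
        'Monolog',            // bot username
        true,                 // use message attachments
        null,                 // icon emoji
        Monolog\Logger::ERROR // only send errors and above
    ));
}
```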

So with just a few registered handlers, we've tailored our logging to suit our current needs.

Notice how we only log to Slack when we’re not in debug mode, to filter out any errors thrown while developing.


HTTPS and HTTP/2 with letsencrypt on Debian and nginx


Most web servers today run HTTP/1.1, but HTTP/2 support is growing. I've written more in-depth about the advantages of HTTP/2. Most browsers that support HTTP/2 only support it over encrypted HTTPS connections. In this article, I'll go through setting up NGINX to serve pages over HTTPS and HTTP/2. I'll also talk a bit about tightening your HTTPS setup to prevent common exploits.


HTTPS utilises public-key cryptography to provide end-to-end encryption and authentication for connections. Certificates are signed by a certificate authority, which can then verify that the certificate holder is who they claim to be.

Letsencrypt is a free, automated certificate authority. Certificate signing has historically been a costly affair, but Letsencrypt provides it for free.

Signing and renewing Letsencrypt certificates is done using their certbot tool, which is available in most package managers, making it easy to install. On Debian Jessie, just install certbot like anything else from apt:
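Something along these lines (assuming the certbot package is available in your configured repositories; on Jessie it shipped via jessie-backports):

```
sudo apt-get install certbot
```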

Using certbot you can start generating new signed certificates for the domains of your hosted websites:

For example
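A hypothetical invocation using certbot's webroot plugin (the domain and path are placeholders):

```
sudo certbot certonly --webroot -w /var/www/example.com -d example.com -d www.example.com
```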

Letsencrypt certificates are valid for 90 days, after which they must be renewed by running
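The renewal command looks like this; certbot only renews certificates that are close to expiry:

```
sudo certbot renew
```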

To automate this process, you can add this to your crontab, and make it run daily or weekly or whatever you prefer. In my setup it runs twice daily:
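For example, a crontab entry along these lines (the exact times are illustrative):

```
# m  h     dom mon dow  command
17   5,17  *   *   *    certbot renew --quiet
```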

Note that Letsencrypt currently doesn't support wildcard certificates, so if you're serving your website from both the bare domain and the www subdomain (you probably shouldn't), you need to generate a certificate for each.


NGINX is a free open source asynchronous HTTP server and reverse proxy. It’s pretty easy to get up and running with Letsencrypt and HTTP/2, and it will be the focus of this guide.


To have NGINX serve encrypted data using our previously created Letsencrypt certificate, we have to ask it to listen for HTTPS connections on port 443 and tell it where to find our certificates.

Open your site config, usually found in /etc/nginx/sites-available/<site>, here you’ll probably see that NGINX is currently listening for HTTP-connections on port 80:
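Something like this (the domain is a placeholder):

```nginx
server {
    listen 80;
    server_name example.com;
    # ...
}
```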

So to start listening on port 443 as well, we just add another line:
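A sketch, with example.com standing in for your own domain:

```nginx
server {
    listen 80;
    listen 443 ssl;
    server_name example.com;
    # ...
}
```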

The domain at the end would, of course, be the domain that your server is hosting.

Notice that we’ve added the ssl statement in there as well.

Next, we need to tell NGINX where to find our new certificates so it knows how to encrypt the data. We do this by adding the SSL certificate directives:

With the default Letsencrypt settings this would translate into something like:
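Assuming certbot's default paths, something like (the domain is a placeholder):

```nginx
ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
```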

And that’s really all there is to it.


Enabling HTTP/2 support in NGINX is even easier, just add an http2 statement to each listen line in your site config. So:

Turns into:
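Sticking with the listen line from before:

```nginx
# Before:
#   listen 443 ssl;
# After:
listen 443 ssl http2;
```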

Now test out your new, slightly more secure, setup:
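NGINX can validate its configuration without restarting:

```
sudo nginx -t
```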

If all tests pass, restart NGINX to publish your changes.

Now your website should also be available using the https:// scheme… unless the port is blocked in your firewall (that could happen to anybody). If your browser supports HTTP/2, your website should also be served over this new protocol.

Improving HTTPS security

With the current setup, we’re running over HTTPS, which is a good start, but a lot of exploits have been discovered in various parts of the implementation, so there are a few more things we can do to harden the security. We do this by adding some additional settings to our NGINX site config.

Firstly, old versions of TLS are insecure, so we should force the server not to fall back to them:
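For example, allowing only TLS 1.2 (the strictest option broadly supported at the time of writing):

```nginx
ssl_protocols TLSv1.2;
```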

If you need to support older browsers like IE10, you need to turn on older versions of TLS:
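That would look like:

```nginx
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
```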

In effect this only turns off the old SSL protocols; it's not optimal, but sometimes you need to strike a balance, and it's still better than the NGINX default settings. You can see which browsers support which TLS versions on caniuse.

When establishing a connection over HTTPS, the server and the client negotiate which encryption cipher to use. This has been exploited in some cases, like in the BEAST exploit. To lessen the risk, we disable certain old, insecure ciphers:
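One commonly recommended combination (the exact cipher list is a judgment call and worth revisiting over time):

```nginx
ssl_prefer_server_ciphers on;
ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
```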

Normally, HTTPS certificates are verified by the client contacting the certificate authority. We can improve on this by having the server download the authority's response and supply it to the client together with the certificate. This saves the client the round trip to the certificate authority, speeding up the process. This is called OCSP stapling and is easily enabled in NGINX:
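A sketch (the resolver address is a placeholder for whichever DNS resolver you prefer):

```nginx
ssl_stapling on;
ssl_stapling_verify on;
resolver 8.8.8.8 8.8.4.4; # DNS resolver used to fetch the OCSP response
```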

Enabling HTTPS is all well and good, but if a man-in-the-middle (MitM) attack actually occurs, the perpetrator can decrypt the connection from the server and relay it to the client over an unencrypted connection. This can't really be prevented, but it is possible to instruct the client that it should only accept encrypted connections from the domain; this mitigates the problem any time the client visits the domain after the first time. This is called Strict Transport Security:
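The header looks like this; max-age is in seconds, and 31536000 is the one year mentioned below (includeSubDomains is optional and extends the policy to all subdomains):

```nginx
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains";
```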

BE CAREFUL! with adding this last setting, since it will prevent clients from connecting to your site for one full year if you decide to turn off HTTPS or have an error in your setup that causes HTTPS to fail.

Again, we test our setup:

If all tests pass, restart NGINX to publish your changes (again, consider if you’re actually ready to enable Strict Transport Security).

Now, to test the setup. First we run an SSL test to check the security of our protocol and cipher choices; the changes mentioned here improved this domain to an A+ grade.

We can also check our HTTPS header security. Since I still haven't set up Content Security Policies and HTTP Public Key Pinning, I'm sadly stuck at a B grade, leaving room for improvement.

Further reading



As a new feature this year, I'd like to start doing monthly summaries of interesting articles I've read throughout the month that I feel are worth returning to. The lists will include a bunch of different articles revolving around web development, but will probably mainly focus on PHP. It might also show that I work with Laravel in my day job.

I will include anything that pops up on my radar during the month, even if some of it might be old news, as long as I feel it's still relevant.

I believe one of the best ways to learn is to look at how others do the job you're trying to become better at. I've had a hard time finding complete open source Laravel projects, but OpenLaravel showcases just that.

Besides diving into full project code bases, reading about other people's experiences of diving into specific areas of a code base can also be highly valuable; that's why I've written about diving into Laravel Eloquent's getter and setter magic. In the same vein, I found it interesting to read a breakdown of hacking a Concrete5 plugin.

Since we’re already on the topic of security, the people from Paragon IE wrote a PHP security pocket guide.

I like to consider myself a pragmatic programmer, who focuses on getting the right things to work in a proper way, which means not necessarily trying to solve every imaginable scenario that might be a problem someday. In other words, I try to practice YAGNI.

Shawn McCool has gathered a tonne of interesting information regarding the Active Record design pattern (the base for Laravel's Eloquent ORM) in his giant all things Active Record article.

The last link for this month is a short story about why a bug fix is called a patch.


Exporting from MySQL to CSV file

We often have to export data from MySQL to other applications, whether to further analyse the data, gather user emails for a newsletter, or similar. For these purposes, CSV is probably the most common export format.

Luckily, SQL select output, with its rows and columns, is well suited for CSV output.

You can easily export straight from the MySQL client to a CSV file, by appending the expected CSV format to the end of the query:

For example:
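For instance, exporting a hypothetical users table (the table and columns are illustrative):

```sql
SELECT id, name, email
FROM users
INTO OUTFILE '/var/lib/mysql-files/users.csv'
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n';
```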

The last three lines can be customised based on your needs.

NOTE: The default MySQL settings only allow writing files to the /var/lib/mysql-files/ directory.

If you do not have permission to write files from the MySQL client, the same can be accomplished from the command line:

For example:
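A sketch (credentials, database and query are placeholders; the sed expression is a slightly simplified variant of the substitutions explained below):

```
mysql -u username -p -B -e "SELECT id, name, email FROM users" database_name \
  | sed 's/\t/","/g;s/^/"/;s/$/"/' > users.csv
```

The -B flag puts the client in batch mode, producing tab-separated output, which sed then quotes into CSV.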

The regex is, of course, courtesy of Stack Overflow.

Regex Explanation:

s/// means substitute what's between the first // with what's between the second //
the "g" at the end is a modifier that means "all instances, not just the first"
^ (in this context) means beginning of line
$ (in this context) means end of line
So, putting it all together:

s/'/\'/ replaces ' with \'
s/\t/\",\"/g replaces all \t (tab) with ","
s/^/\"/ places a " at the beginning of the line
s/$/\"/ places a " at the end of the line
s/\n//g replaces all \n (newline) with nothing
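To see the core substitutions in action, here is a self-contained illustration that simulates one row of MySQL's tab-separated batch output:

```shell
# One tab-separated row, piped through the quoting substitutions:
printf '1\tAlice\talice@example.com\n' | sed 's/\t/","/g;s/^/"/;s/$/"/'
# → "1","Alice","alice@example.com"
```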


Lazy loading, eager loading and the n+1 problem

In this article I'll look into a few data loading strategies, namely:

  • Lazy loading
  • Eager loading
  • Lazy-eager loading

I’ll talk a bit about the pros and cons of each strategy, and common issues developers might run into. Especially the n+1 problem, and how to mitigate it.

The examples in the post are written using PHP and Laravel syntax, but the idea is the same for any language.

Lazy Loading

The first loading strategy is probably the most basic one. Lazy loading means postponing loading data until the time where you actually need it.

Lazy loading has the advantage that you will only load the data you actually need.

The n+1 problem

The problem with lazy loading is what is known as the n+1 problem. Because the data is loaded for each user independently, n+1 database queries are required, where n is the number of users. If you only have a few users this will likely not be a problem, but performance will degrade quickly as the number of users grows.
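In Eloquent terms, the lazy-loading pattern and its n+1 behaviour look something like this (the User model and profile relation are illustrative):

```php
$users = User::all(); // 1 query

foreach ($users as $user) {
    // Accessing the relation lazily triggers 1 query per user: n extra queries.
    echo $user->profile->bio;
}
```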

Eager Loading

In the eager loading strategy, data loading is moved to an earlier part of the code.

This solves the n+1 problem. Since all of the related data is fetched up front, the number of queries is independent of the number of items fetched.
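With Eloquent, eager loading is a matter of declaring the relation up front (again, the model and relation names are illustrative):

```php
// The profile relation is fetched up front together with the users,
// so the query count no longer depends on the number of users.
$users = User::with('profile')->get();

foreach ($users as $user) {
    echo $user->profile->bio; // no additional queries
}
```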

One problem with eager loading is that you might end up loading more data than you actually need.

Often data is fetched in a controller and used in a view; in this case, the two become very tightly coupled, and every time the data requirements of the view change, the controller needs to change as well, requiring maintenance in several different places.

Lazy-eager loading

Lazy-eager loading combines both of the strategies above: loading of data is postponed until it is required, but it is still prepared beforehand. Let's see an example.
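A sketch using Eloquent's load() (model and relation names are illustrative):

```php
// Loaded early, e.g. in a controller, without the relation:
$users = User::all();

// Later, just before the relation is needed:
$users->load('profile'); // one extra query for all users at once

foreach ($users as $user) {
    echo $user->profile->bio;
}
```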

As usual, a simple example like this only shows part of the story, but the $users list would usually be loaded in a controller, away from the iterator.

In this case, we’re keeping related code close together which means you’ll likely have fewer places to go to handle maintenance, but since the data is often required in templates we might be introducing logic into our templates giving the template more responsibilities. This can both make maintenance and performance testing harder.

Which strategy to choose?

This article has introduced three different data loading strategies, namely; lazy loading, eager loading and lazy-eager loading.

I’ve tried to outline som pros and cons of each strategy, as well as some of the issues related to each of them.

Which strategy to choose depends on the problem you're trying to solve, since each strategy makes sense in different cases.

As with most other problems, I'd usually start by implementing whichever strategy is simplest and fastest to implement, to create a PoC of a solution to my problem. Afterwards I'd go through a range of performance tests to see where the bottlenecks in my solution appear, and then look into which alternative strategy would solve each bottleneck.


HTTP/2 – What and why?

What is HTTP?

HTTP, HyperText Transfer Protocol, is an application layer network protocol for distributing hypermedia data. It is mainly used for serving websites in HTML format for browser consumption.


The newest version of HTTP is HTTP/2. The protocol originated at Google under the name SPDY. The specification work and maintenance has later been moved to the IETF.

The purpose of HTTP/2 was to develop a better-performing protocol while maintaining high-level backward compatibility with previous versions of HTTP. This means keeping the same HTTP methods, status codes, etc.

New in HTTP/2

HTTP/2 maintains backward compatibility in the sense that applications served over the protocol do not require any changes to work with the new version. But the protocol contains a range of new performance-enhancing features that applications can adopt on a case-by-case basis.

Header compression

HTTP/2 supports most of the headers supported by earlier versions of HTTP. As something new, HTTP/2 also supports compressing these headers to minimise the amount of data that has to be transferred.

Request pipelining

In earlier versions of HTTP, one TCP connection equaled one HTTP connection. In HTTP/2 several HTTP requests can be sent over the same TCP connection.

This allows HTTP/2 to bypass some of the issues in previous versions of the protocol, like the maximum connection limit. It also means that all of the formalities of TCP, like handshakes and path MTU discovery, only have to be done once.

No HOL blocking

Head-of-line (HOL) blocking occurs when incoming data must be handled in a specific order and the first piece of data takes a long time, forcing all of the following data to wait for it to be processed.

In HTTP/1, HOL blocking happens because multiple requests over the same TCP connection have to be handled in the order they were received. So if a client makes two requests to the same server, the second request won't be handled before the first request has been completed. If the first request takes a long time to complete, this will hold up the second request as well.

HTTP/2 allows pipelining requests over a single TCP connection, allowing the second request to be received at the same time, or even before, the first request. This means the server is able to handle the second request even while it’s still receiving the first request.

Server Push

In earlier versions, a typical web page was rendered in a sequential manner. First the browser would request the page to be rendered. The server would then respond with the main content of the page, typically in HTML format. The browser would then start rendering the HTML from the top down. Every time the browser encountered an external resource, like a CSS file, an external JavaScript file, or an image, a new request would be made to the server to fetch the additional resource.

HTTP/2 offers a feature called server push. Server push allows the server to respond to a single request with several different resources. This allows the server to push required external CSS and JavaScript files to the user on a normal page request, thus allowing the client to have the resources at hand during the rendering, preventing the render-blocking additional requests that otherwise had to be done during page rendering.
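As an example, newer NGINX versions (1.13.9 and later) can push resources alongside a response (a sketch; the paths are illustrative):

```nginx
location = /index.html {
    http2_push /css/styles.css; # pushed before the browser asks for them
    http2_push /js/app.js;
}
```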


The HTTP/2 specification, like earlier versions, allows HTTP to function over a normal unencrypted connection, but also specifies a method for transferring over a TLS-encrypted connection, namely HTTPS. The major browser vendors, though, have decided to force higher security standards by only accepting HTTP/2 over encrypted HTTPS connections.


PHP regular expression functions causing segmentation fault

We recently had an issue where generating large exports for users to download would suddenly just stop for no apparent reason. Because of the size of the exports, the first thought was that the process was timing out, but the server didn't return a timeout error to the browser, and the process didn't really run long enough to hit the time limit set on the server.

Looking through the Laravel and Apache vhost logs where the errors would normally be logged didn’t provide any hints as to what the issue was. Nothing was logged. After some more digging I found out that I could provoke an error in the general Apache log file.

It wasn’t a lot to go on, but at least I had a reproducible error message. A segmentation fault (or segfault) means that a process tries to access a memory location that it isn’t allowed to access, and for that reason it’s killed by the operating system.

After following along the code to try to identify when the segfault actually happened, I concluded it was caused by a call to preg_match().

Finally I had something concrete to start debugging (aka Googling) with, and I eventually found the Stack Overflow question that contained the answer.

In short, the problem happens because the preg_* functions in PHP build upon the PCRE library. In PCRE, certain regular expressions are matched using a lot of recursive calls, which use up a lot of stack space. It is possible to set a limit on the number of recursions allowed, but in PHP this limit defaults to 100,000, which requires more stack space than is available. Setting a more realistic pcre.recursion_limit, as suggested in the Stack Overflow thread, solved my immediate issue, and if the limit should prove too low for my system's requirements, at least I will now get a proper error message to work from.
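The fix is a php.ini change along these lines (the value is illustrative; the right number depends on your thread stack size):

```ini
; Keep PCRE recursion within the available stack space.
pcre.recursion_limit = 10000
```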

The Stack Overflow answer contains a more in-depth explanation of the problem, the solution and other related issues. Definitely worth a read.


OOP Cheatsheet

This is a small OOP (Object oriented programming) cheatsheet, to give a quick introduction to some of the commonly used OOP terminology.
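The line numbers below refer to a small PHP example along these lines (a reconstruction of the lost listing; the function and variable names are illustrative, while the Something class matches the names used in the table):

```php
 1  function something() {
 2      $variable = 'value';
 3
 4      echo $variable;
 5  }
 6
 7  something();
 8
 9  class Something {
10      protected $somethingElse = 'something else';
11
12      public function returnSomethingElse() {
13          return $this->somethingElse;
14      }
15  }
16
17  $object = new Something();
18
19  echo $object->returnSomethingElse();
20
21  echo $object->somethingElse; // fails: the property is protected
```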

Line Concept Definition
1 Function A function is defined in the global namespace and can be called from anywhere.
2 Variable The variable is enclosed in the function definition, so it can only be used by that function.
4 Variable access Here the value contained in the variable is accessed.
7 Function call This executes the function.
9 Class definition A class is a blueprint that you can generate objects from. All new objects based on a class will start out with everything that has been defined in the class definition.
10 Property A property is like a variable, but is accessible from the entire object it is defined in. Properties can have different visibilities.
12 Method A method is a function defined inside a class. It is always accessible to all objects of the class, and depending on its visibility it might, or might not, be accessible from outside the class.
13 Property usage An object can reach its own properties using the $this-> syntax.
17 Object instantiation This is how an object is created based on a class definition.
19 Method call The method Something::returnSomethingElse() is called on the newly created object. The method has its visibility set to "public", hence it can be called from outside the object itself.
21 Property access This is how the property Something::$somethingElse would be accessed from outside the object. But in this case the property has the visibility protected, which means it can't be accessed from outside the object itself, hence this will cause PHP to fail.

Validating email senders

Email is one of the most heavily used platforms for communication online, and has been for a while.

Most people using email expect that they can see who sent an email by looking at the sender field in their email client, but in reality the Simple Mail Transfer Protocol (SMTP) that defines how emails are exchanged (as specified in RFC 5321) does not provide any mechanism for validating that the sender is actually who they claim to be.

Not being able to validate the origin of an email has proven to be a really big problem, and provides venues for spammers, scammers, phishers and other bad actors to pretend to be someone they are not. A couple of mechanisms have since been designed on top of SMTP to try to add this validation layer, and I'll try to cover some of the more widely used ones here.

Since we’re talking about online technologies, get ready for abbr galore!


Sender Policy Framework (SPF)

The Sender Policy Framework (SPF) is a simple system allowing mail exchangers to validate that a certain host is authorised to send out emails on behalf of a specific domain. SPF builds on the existing Domain Name System (DNS).

SPF requires the domain owner to add a simple TXT record to the domain's DNS records, which specifies which hosts and/or IPs are allowed to send out mail on behalf of the domain in question.

An SPF record is a simple line in a TXT record of the form:

For example:
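The general form, followed by an illustrative record (the domain and mechanisms are placeholders):

```
v=spf1 [mechanisms] [qualifier]all

example.com.  IN  TXT  "v=spf1 a mx include:_spf.example.com ~all"
```

Here a and mx authorise the hosts behind the domain's A and MX records, include pulls in another domain's SPF policy, and ~all softfails everything else.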

When an email receiver receives email the process is:

  1. Lookup the sending domain’s DNS records
  2. Check for an SPF record
  3. Compare the SPF record with the actual sender
  4. Accept / reject the email based on the output of step 3

For more details check out the SPF introduction or check your domain’s setup with the SPF checker.

DomainKeys Identified Mail (DKIM)

DomainKeys Identified Mail (DKIM) is another mechanism for validating whether the sender of an email is actually allowed to send mail on behalf of the sending domain. Similarly to SPF, DKIM builds on DNS, but DKIM uses public-key cryptography, similar to the TLS and SSL protocols that provide the basis for HTTPS.

In practice, DKIM works by having the sender add a header to all outgoing emails containing a hashed and cryptographically signed version of part of the body, as well as some selected headers. The receiver then reads the header, queries the sending domain's DNS for the public key needed to verify the signature, and checks its validity.
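An outgoing message then carries a header roughly like this (the domain and selector are placeholders; the hash and signature values are elided):

```
DKIM-Signature: v=1; a=rsa-sha256; d=example.com; s=mail;
        c=relaxed/relaxed; h=from:to:subject:date;
        bh=...; b=...
```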

For more details, Wikipedia provides a nice DKIM overview.

Domain Message Authentication Reporting & Conformance (DMARC)

Domain Message Authentication Reporting & Conformance (DMARC) works in collaboration with SPF and DKIM. In its simplest form, DMARC provides rules for how to handle messages that fail their SPF and/or DKIM checks. Like the other two, DMARC is specified using a TXT DNS record, of the form:

For instance
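An illustrative record (the domain and report addresses are placeholders):

```
_dmarc.example.com.  IN  TXT  "v=DMARC1; p=none; ruf=mailto:forensics@example.com; rua=mailto:reports@example.com"
```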

This specifies that the record contains DMARC rules (v=DMARC1), that nothing special should be done to emails failing validation (p=none), that any forensic reports should be sent to the address given in ruf, and that any aggregate reports should be sent to the address given in rua.

Setting the DMARC policy to "none" puts DMARC in investigation mode. In this mode the receiver will not change anything regarding the handling of failing mails, but error reports will be sent to the email addresses specified in the DMARC DNS record. This allows domain owners to gather data about where emails sent from their domains are coming from, and to make sure all necessary SPF and DKIM settings are aligned properly, before moving DMARC into quarantine mode, where receivers are asked to flag mails failing their checks as spam, or reject mode, where failing emails are rejected straight away.

For a bit more detail, check out the DMARC overview or check the validity of your current setup using the DMARC validator.


Wrapping up Laracon EU 2016

Last week I spent some days in Amsterdam attending Laracon EU 2016. It was two very interesting days, and I think the general level of the talks was very high compared to other conferences I've attended. The location and the catering were also really good, and I was impressed with how smoothly it all seemed to go, at least for us as participants. Good job!

Here I’ve tried to gather up some of my notes from the talks I saw. It’s mainly meant to serve as my personal notes, but I also try to give some recommendations as to which talks are worth watching when the videos are released.

The videos from the event haven't been released yet, but you can find the Laracon US videos.

Taylor Otwell – Keynote / Laravel 5.3

Taylor continued his Laracon US keynote, and highlighted some of the other new features Laravel 5.3 will bring.

The emphasis of his talk was on:

  • Echo – Which makes it easy to provide real-time push notifications to online users of your app, for example straight to the website, or through notifications on your phone. One major advantage in this update is the ease of setting up private notification channels.
  • Notifications – An easier interface for pushing user notifications to various services like Slack and email. The interface makes it easy to create integrations with new services.

Hannes van de Vreken – IoC Container Beyond Constructor Injection

Hannes did an interesting talk on IoC containers. The first part of the talk was a general introduction to dependency injection and IoC containers and the purpose of both concepts. Afterwards he dove into some more advanced subjects, like contextually binding interfaces to implementations, and container events, which can be used to lazy load services or change the settings of a service before injection.

He also talked about lazy loading services by using method injection and using closures for lazy loading services, not only when the requiring service is instantiated, but all the way to the point where the injected service is actually being used, like it’s done in Illuminate\Events\Dispatcher::setQueueResolver().

The talk definitely gave me some takeaways I want to look more into.

Mitchell van Wijngaarden – The past is the future

Mitchell did a talk on event sourcing, a topic I had only recently heard about for the first time. It was an interesting talk with a lot of bad jokes and puns to keep you awake (or whatever their purpose was), and it gave a nice introduction to the subject, how it can be utilised, and some of the pros of using it.

I think event sourcing is a pretty interesting concept, and I’d like to see it used in a larger project to see how it holds up. To me it sounds like overkill in many situations, but I’ve definitely done projects where knowing about it would have helped simplify both the architecture and the logic a great deal.

An interesting talk for developers working with transaction-based domains or who just wants some new inspiration.

Lily Dart – No excuses user research

Lily talked about the importance of user research and of knowing what your users actually want, instead of just guessing. It would have been nice to see some actual examples of projects where it had been used, how, and the results of the research, but I'm already pretty convinced that data as proof is better than anyone's best guess, so this talk only served to make that belief stronger.

She provided some easy ways to start collecting data about your customers' wants and behaviour that I think could be interesting to look into:

  • Bug reports – Bug reports contain a wealth of knowledge about what your users are struggling with. Often we as developers have a tendency to push aside reports, big or small, as simply being because the user didn't understand how something works, but this is often caused by usability issues in the system they're using. Lily suggested tagging all bug reports, to provide an overview of which parts of your system should perhaps be easier to understand.
  • Transactional audits – Transactional audits are the small feedback forms we sometimes meet after completing a transaction. Many help systems, for instance, include a small form at the bottom of each help section asking the simple question “Did this page answer your question?”, where if we answer no, we’re asked what was missing, or what we were actually looking for.
  • Search logs – If your website has a search engine, logging all searches can also provide some interesting knowledge, both about what your users actually want to know more about, and about what they are struggling to find. This can give you an idea about things like features that are hard for the user to understand, issues in your site architecture that make it hard to find certain information, or maybe even subjects people would like your website to expand on.

A really interesting talk I’d recommend to anyone working with websites (developers, marketing, managers etc).

Evan You – Modern frontend development with vue.js

Evan gave an introduction to the Vue.js framework, where it came from and some of the architecture decisions it's based on. It was a very theoretical talk that provided some good background knowledge, but I had hoped for a more hands-on approach and some more code. I believe he did that at his Laracon US talk, so I should probably watch that as well. Even so, the talk still provided some good insights that I'm sure will help me when I start looking into using Vue, which will hopefully happen soon.

It was an interesting talk if you'd like some background on Vue and its structure, but if you just want to learn how to get started using it, there are probably better talks out there, like the ones from Laracon US.

Matthias Noback – Please understand me

Matthias gave a talk to remind us all that working as a developer isn't only about developing software. On the personal side, it's important to work in a place where you feel appreciated and respected, and where you have access to the tools you need to do your work.

On the other hand, you also need to do your part to make the job meaningful. Try to figure out who the customers are, and what they think about and want. Knowing who you're doing the job for, and why they need it, will help you understand what actually needs to be done, and will help you make better decisions about your product. In the same way it's useful to get to know your manager, as that will make communication easier when the deadlines draw closer.

If you really want to be taken seriously, you also need to take yourself and your job seriously. Take responsibility for your job. Show up, set deadlines and meet them, and deliver high-quality work. Take your colleagues, managers and customers seriously; don't be a 'developer on a throne'.

There was nothing particularly new in the talk, but I believe it serves as a good reminder of some things that many either ignore or take for granted. A good talk to watch for any developer, or for anyone managing developers.

Abed Halawi – The lucid architecture for building scalable applications

Abed talked about what he described as the lucid architecture, and the general thinking behind the problem he and his team were solving. He described an architecture as an expression of a viewpoint that is used to communicate structure, and that should hopefully help eradicate homeless code by giving every piece of code one obvious, unquestionable place to reside.

The requirements for Abed's team's architecture were that it should kill legacy code, define terminology, be comprehensive without limitations, complement their framework's design and perform at scale.

The lucid architecture consists of three parts:

  • Features – Each feature fulfills one business requirement. Features are grouped into domains, and a feature works by running a range of jobs in order. CreateArticleFeature could be a feature name.
  • Jobs – Each job handles one step required to fulfill a feature, e.g. validation. SaveArticleJob could be a job name. Each job can be used by several different features.
  • Services – A service is a collection of features. Features are not reused between services: the website service and the API service would each have their own CreateArticleFeature. Jobs can be reused, though.

In the lucid architecture, controllers are VERY slim: each controller serves one feature and does nothing else. Everything from validation to domain object creation/updating and response preparation is handled by different jobs launched by the feature.
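Abed's examples were PHP, but the feature/job relationship can be sketched in a few lines of Python. This is an illustrative sketch only, with hypothetical class names modelled on the talk's naming convention, not the actual Lucid framework API:

```python
# Illustrative sketch of the lucid feature/job split (hypothetical names,
# not the actual Lucid framework API).

class ValidateArticleJob:
    """One step of a feature; jobs are reusable across features."""
    def run(self, data):
        if not data.get("title"):
            raise ValueError("title is required")
        return data

class SaveArticleJob:
    def run(self, data):
        # Pretend to persist the article and return it with an id.
        return {"id": 1, **data}

class CreateArticleFeature:
    """A feature fulfills one business requirement by running jobs in order."""
    jobs = [ValidateArticleJob, SaveArticleJob]

    def handle(self, data):
        for job in self.jobs:
            data = job().run(data)
        return data

# A controller would do nothing but serve this one feature:
article = CreateArticleFeature().handle({"title": "Hello"})
print(article)  # {'id': 1, 'title': 'Hello'}
```

Because each job is a self-contained unit with a single `run` entry point, moving one onto a queue later only means changing how it is dispatched, not how it is written.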

I found the idea pretty interesting, especially since it removes some of the overlap between different concepts by giving each domain-level concept a specific use case. I also like how all logic is handled in specific, clearly separated jobs, making it easy to move jobs to queues if necessary. It looks a bit like the direction we're currently taking our code base at my job, though we're not quite so radical in our approach.

An interesting talk to watch if you want some new inspiration regarding architecture.

Gabriela D’Avila

Gabriela talked about some of the changes coming to MySQL in version 5.7. A lot of the talk went a bit over my head since I’m not a database specialist, but it was interesting and gave some good pointers for things to look more into.

MySQL 5.7 enables the NO_ZERO_DATE option by default, which might have implications for our application since we actually use zero dates.

MySQL 5.7 also introduces generated (virtual) columns, which can calculate values based on the values of other columns, like concatenating a first_name and a last_name column into a full_name. If I recall correctly, they can also extract attribute values from the new JSON column type, which would be pretty cool.
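MySQL's exact syntax differs, but the generated-column idea can be sketched with SQLite (which has supported generated columns since version 3.31) via Python's sqlite3 module. The table and column names are just the ones from the example above:

```python
import sqlite3

# Generated-column sketch. SQLite >= 3.31 syntax; MySQL 5.7 would use
# something like: full_name VARCHAR(120) AS (CONCAT(first_name, ' ', last_name)) VIRTUAL
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE users (
        first_name TEXT,
        last_name  TEXT,
        full_name  TEXT GENERATED ALWAYS AS (first_name || ' ' || last_name) VIRTUAL
    )
""")
conn.execute("INSERT INTO users (first_name, last_name) VALUES ('Ada', 'Lovelace')")
full_name = conn.execute("SELECT full_name FROM users").fetchone()[0]
print(full_name)  # Ada Lovelace
```

A VIRTUAL column is computed on read; both MySQL and SQLite also offer a STORED variant that is materialized on write.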

Jeroen V.D. Gulik – How to effectively grow a development team

Jeroen gave a talk about developer culture in teams, and how he had built his developer team at Schiphol airport. He talked a lot about what culture is, what developer culture is, how to foster a positive culture, and how that culture relates to developer happiness. He had a lot of good points, too many to note here, and I'd recommend anyone interested in company culture to watch this talk. It's relevant to everyone from developers through developer managers to higher-level managers in technology-focused companies.

Adam Wathan – Curing the common loop

The last talk I saw was Adam Wathan's talk about using loops when programming versus using higher-order functions, i.e. functions that take other functions as parameters and/or return functions. The basis of the talk was the three things Adam claims to hate:

  • Loops
  • Conditionals
  • Temporary variables

I can see the point about the code getting a lot more readable, and I like how the approach requires a radically different thought process compared to how I'd normally do things. I'd definitely recommend any developer to watch this talk.
