
Jesper Jarlskov's blog Posts

Wrapping up Laracon EU 2016

Last week I spent some days in Amsterdam attending Laracon EU 2016. It was two very interesting days, and I think the general level of the talks was very high compared to other conferences I’ve attended. The location and the catering were also really good, and I was impressed with how smoothly it all seemed to go, at least for us as participants. Good job!

Here I’ve tried to gather up some of my notes from the talks I saw. It’s mainly meant to serve as my personal notes, but I also try to give some recommendations as to which talks are worth watching when the videos are released.

The videos haven’t been released yet, but you can find the Laracon US videos online.

Taylor Otwell – Keynote / Laravel 5.3

Taylor continued his Laracon US keynote, and highlighted some of the other new features Laravel 5.3 will bring.

The emphasis of his talk was on:

  • Echo – Makes it easy to provide real-time push notifications to online users of your app, for example straight to the website, or through notifications on your phone. One major advantage in this update is the ease of setting up private notification channels.
  • Notifications – An easier interface for pushing user notifications to various services like Slack and email. The interface makes it easy to create integrations with new services.

Hannes van de Vreken – IoC Container Beyond Constructor Injection

Hannes did an interesting talk on IoC containers. The first part of the talk was a general introduction to dependency injection and IoC containers and the purpose of both concepts. Afterwards he dove into some more advanced subjects, like contextually binding interfaces to implementations, and container events, which can be used to lazy load services or to change the settings of a service before injection.

He also talked about lazy loading services using method injection, and about using closures to defer loading a service, not only until the requiring service is instantiated, but all the way to the point where the injected service is actually being used, like it’s done in Illuminate\Events\Dispatcher::setQueueResolver().

The talk definitely gave me some takeaways I want to look more into.

Mitchell van Wijngaarden – The past is the future

Mitchell did a talk on event sourcing, a topic I had only recently heard about for the first time. It was an interesting talk with a lot of bad jokes and puns to keep you awake (or whatever their purpose was), which gave a nice introduction to the subject, how it can be utilized, and some of the pros of using it.

I think event sourcing is a pretty interesting concept, and I’d like to see it used in a larger project to see how it holds up. To me it sounds like overkill in many situations, but I’ve definitely done projects where knowing about it would have helped simplify both the architecture and the logic a great deal.

An interesting talk for developers working with transaction-based domains, or anyone who just wants some new inspiration.

Lily Dart – No excuses user research

Lily talked about the importance of user research and of knowing what your users actually want, instead of just guessing. It would have been nice to see some actual examples of projects where it had been used, how, and the results of the research, but I’m already pretty convinced that data as proof is better than anyone’s best guess, so this talk only served to make that belief stronger.

She provided some easy ways to start collecting data about your customers’ wants and behaviour that I think could be interesting to look into:

  • Bug reports – Bug reports contain a wealth of knowledge about what your users are struggling with. Often we as developers have a tendency to push aside reports, big or small, as simply being caused by the user not understanding how something works, but this is often down to usability issues in the system they’re using. Lily suggested tagging all bug reports, to provide an overview of which parts of your system should perhaps be easier to understand.
  • Transactional audits – Transactional audits are the small feedback forms we sometimes meet after completing a transaction. Many help systems, for instance, include a small form at the bottom of each help section asking the simple question “Did this page answer your question?”, where if we answer no, we’re asked what was missing, or what we were actually looking for.
  • Search logs – If your website has a search engine, logging all searches can provide some interesting knowledge, both about what your users actually want to know more about, and about what they are struggling to find. This can point to features that are hard for the user to understand, issues in your site architecture that make certain information hard to find, or even subjects people would like your website to expand on.

A really interesting talk I’d recommend to anyone working with websites (developers, marketing, managers etc).

Evan You – Modern frontend development with vue.js

Evan gave an introduction to the Vue.js framework, where it came from, and some of the architecture decisions it’s based on. It was a very theoretical talk that provided some good background knowledge. I had hoped for a more hands-on approach and some more code, but I believe he did that in his Laracon US talk, so I should probably watch that as well. Even so, the talk still provided some good insights that I’m sure will help me when I start looking into using Vue, which will hopefully happen soon.

It was an interesting talk if you’d like some background on Vue and its structure, but if you just want to learn how to get started using it, there are probably better talks out there, like the ones from Laracon US.

Matthias Noback – Please understand me

Matthias gave a talk to remind us all that working as a developer isn’t only about developing software. On the personal side it’s important to work in a place where you feel appreciated and respected, and that you have access to the tools you need to do your work.

On the other hand you also need to do your part to make the job meaningful. Try to figure out who the customers are, and what they think about and want. Knowing who you’re doing the job for, and why they need it, will help you understand what actually needs to be done, and will help you make better decisions about your product. In the same way it’s useful to get to know your manager; that will make communication easier when the deadlines draw closer.

If you really want to be taken seriously you also need to take yourself and your job seriously. Take responsibility for your job. Show up, set deadlines and meet them, and deliver high quality work. Take your colleagues, managers and customers seriously; don’t be a ‘developer on a throne’.

There was nothing particularly new in the talk, but I believe it serves as a good reminder of some things that many either ignore or take for granted. A good talk to watch for any developer or people managing developers.

Abed Halawi – The lucid architecture for building scalable applications

Abed talked about what he describes as lucid architecture, and the general thinking behind the problem he and his team were solving. He described an architecture as an expression of a viewpoint, used to communicate structure, which should hopefully help eradicate homeless code by providing every piece of code with one obvious, unquestionable place to reside.

The requirements for Abed’s team’s architecture were that it should kill legacy code, define terminology, be comprehensive without limitations, complement their framework’s design and perform at scale.

The lucid architecture consists of 3 parts:

  • Features – Each feature fulfills one business requirement. Features are grouped into domains, and a feature works by running a range of jobs in order. CreateArticleFeature could be a feature name.
  • Jobs – Each job handles one step required to fulfill a feature, e.g. validation. SaveArticleJob could be a job name. Each job can be used by several different features.
  • Services – A service is a collection of features. Features are not reused between services; the website service and the API service would each have their own CreateArticleFeature. Jobs can be reused, though.

In the lucid architecture controllers are very slim: each controller serves one feature and does nothing else. Everything from validation to domain object creation/updating and response preparation is handled by the different jobs launched by the feature.
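The feature/job split can be sketched in plain PHP. The class and job names below (CreateArticleFeature, ValidateArticleJob, SaveArticleJob) follow the naming from the talk, but the implementation is my own illustration, not code from Abed's actual framework:

```php
<?php

interface Job
{
    public function handle(array $payload): array;
}

class ValidateArticleJob implements Job
{
    public function handle(array $payload): array
    {
        if (empty($payload['title'])) {
            throw new InvalidArgumentException('An article needs a title');
        }

        return $payload;
    }
}

class SaveArticleJob implements Job
{
    public function handle(array $payload): array
    {
        // A real job would persist the article; here we just mark it saved.
        $payload['saved'] = true;

        return $payload;
    }
}

class CreateArticleFeature
{
    /** @var Job[] Jobs are run in order, each handling one step. */
    private $jobs;

    public function __construct()
    {
        $this->jobs = [new ValidateArticleJob(), new SaveArticleJob()];
    }

    public function handle(array $payload): array
    {
        foreach ($this->jobs as $job) {
            $payload = $job->handle($payload);
        }

        return $payload;
    }
}

// The controller would do nothing but serve the feature:
$result = (new CreateArticleFeature())->handle(['title' => 'Lucid architecture']);
```

Since each step is an isolated job, moving a step to a queue only requires changing how the feature dispatches it, not the job itself.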

I found the idea pretty interesting, especially since it removes some of the overlap of different concepts by providing a specific use case for each domain level. I also like how all logic is handled in specific and clearly separated jobs making it easy to move jobs to queues if necessary. It looks a bit like the direction that we’re currently taking our code base at my job, though we’re not quite so radical in our approach.

An interesting talk to watch if you want some new inspiration regarding architecture.

Gabriela D’Avila

Gabriela talked about some of the changes coming to MySQL in version 5.7. A lot of the talk went a bit over my head since I’m not a database specialist, but it was interesting and gave some good pointers for things to look more into.

MySQL 5.7 enables the NO_ZERO_DATE option by default, which might have implications for our application since we actually use zero dates.

MySQL has a concept of virtual columns that can calculate values based on the values of other columns, like concatenating a first_name and a last_name column into a full_name. If I recall correctly, it can also get attribute values from the new JSON type columns, which would be pretty cool.

Jeroen V.D. Gulik – How to effectively grow a development team

Jeroen gave a talk about developer culture in teams, and how he had built his developer team at Schiphol airport. He talked a lot about what culture is, what developer culture is and how to foster a positive culture, and how that culture is related to developer happiness. He had a lot of good points, too many to note here, and I’d recommend anyone interested in company culture to watch this talk, it’s relevant to anyone from developers through developer managers to higher level managers in technology focused companies.

Adam Wathan – Curing the common loop

The last talk I saw was Adam Wathan’s talk about using loops when programming, versus using higher order functions, which are functions that take other functions as parameters and/or return functions. The basis of the talk was the 3 things Adam claims to hate:

  • Loops
  • Conditionals
  • Temporary variables

I can see the point about the code getting a lot more readable, and I like how the approach requires a radically different thought process compared to how I’d normally do things. I’d definitely recommend any developer to watch this talk.
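The contrast can be sketched in plain PHP with made-up data, summing the scores of active users. The loop version needs all three of the hated ingredients; the higher-order version needs none of them:

```php
<?php

$users = [
    ['name' => 'Ann', 'active' => true,  'score' => 10],
    ['name' => 'Bob', 'active' => false, 'score' => 20],
    ['name' => 'Cas', 'active' => true,  'score' => 30],
];

// Loop style: a loop, a conditional, and a temporary variable.
$total = 0;
foreach ($users as $user) {
    if ($user['active']) {
        $total += $user['score'];
    }
}

// Pipeline style with higher order functions: filter, map, reduce.
$pipelineTotal = array_sum(array_map(
    function ($user) { return $user['score']; },
    array_filter($users, function ($user) { return $user['active']; })
));
```

Both produce the same total; the pipeline version reads as a description of *what* is computed rather than *how*.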


Demagicfying Laravel: Model properties; getters and setters

I mostly enjoy working with Laravel. It provides a range of tools that make it really fast to create a proof of concept, allowing you to test out your ideas without spending too much time on them before knowing whether they are actually worth pursuing.

When you’re starting out with Laravel, a lot of what’s going on behind the scenes can feel like magic. This is really nice in the sense that you often don’t have to worry about what is actually going on, but on the other hand it can make things pretty hard to debug; it is not clear what is causing a bug when you’re not sure what is going on.

In this post I’ll look into Eloquent models’ object properties, and how Laravel’s Eloquent ORM handles getting and setting property values.


Setting up

To have an example to work with, we’ll start by setting up a database table for a simple Todolist.

The Todolist is pretty simple, it has a unique auto-incrementing ID, a string name, a text description, and Eloquent’s default timestamps. The most interesting part of the migration file is the up() method, where we define the table.
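The up() method might look something like this. The table name todolists and the exact column definitions are my reconstruction from the description above, not the original migration:

```php
public function up()
{
    Schema::create('todolists', function (Blueprint $table) {
        $table->increments('id');     // unique auto-incrementing ID
        $table->string('name');       // string name
        $table->text('description');  // text description
        $table->timestamps();         // Eloquent's default timestamps
    });
}
```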

Besides the migration, we need to create our Todolist model.

This gives us a basic model class, that we can use to create Todolist objects. The full class looks like this:
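A sketch of the full class; the App namespace is the Laravel 5.x default and may differ in your setup:

```php
<?php

namespace App;

use Illuminate\Database\Eloquent\Model;

class Todolist extends Model
{
    //
}
```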

That’s pretty much as bare bones as it gets.

Setting object properties
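The code in question looks something like the following; the property values are just examples I’ve picked:

```php
$list = new Todolist;
$list->name = 'My new list';
$list->description = 'All the things I need to do';
$list->save();
```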

This saves the model object with its fancy new name and description. But how does this happen? How does Laravel know which properties to save, when the properties aren’t even defined on the object? Let’s look at our object:

We see that our data is set in an array named $attributes, and not as standard object properties. Let’s look into this.

We start by looking at our Todolist model. This is just an empty class, so nothing happens here. The Todolist class extends the Eloquent Model, so let’s look at that one. In a standard Laravel application, you’ll find it in

Obviously our model specific properties aren’t defined here either, since Laravel can’t predict what we need, so something else is going on.

We can see that Laravel uses magic methods, in this case the __set() magic method.

In PHP the __set() magic method is used as a catch-all that is called when trying to set an inaccessible object property, meaning a property that either isn’t defined, or that is inaccessible due to being defined with a protected or private scope.

The method in the Eloquent Model class is defined as:
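In Laravel 5.x it is essentially a one-liner:

```php
public function __set($key, $value)
{
    $this->setAttribute($key, $value);
}
```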

__set() is passed 2 arguments, $key is the name of the property to be accessed, and $value is the value we’re trying to set on the property.

When calling the code:
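```php
$list->name = 'My new list';
```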

$key will have the value ‘name’, and $value will have the value ‘My new list’.

In this case, __set() is only used as a wrapper for the setAttribute()-method, so let’s have a look at that one.

This is an important part of the Laravel setter magic, so let’s go through it step by step.
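Roughly, as of Laravel 5.2 (later versions differ in details), setAttribute() looks like this:

```php
public function setAttribute($key, $value)
{
    // A set mutator (set[Property]Attribute) takes precedence over everything.
    if ($this->hasSetMutator($key)) {
        $method = 'set'.Str::studly($key).'Attribute';

        return $this->{$method}($value);
    }

    // Date attributes are converted to a storable time string.
    elseif ($value && (in_array($key, $this->getDates()) || $this->isDateCastable($key))) {
        $value = $this->fromDateTime($value);
    }

    // JSON castable attributes are encoded before storage.
    if ($this->isJsonCastable($key) && ! is_null($value)) {
        $value = $this->asJson($value);
    }

    // Whatever we end up with goes into the $attributes array.
    $this->attributes[$key] = $value;

    return $this;
}
```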

The first thing that happens is a check whether a set mutator method exists for the given property.

In an Eloquent context a set mutator is an object method of the form set[Property]Attribute(), so for our name attribute that would be setNameAttribute(). If a method with that name is defined, Laravel will call it with our value. This makes it possible to define our own setter methods, overriding the standard Laravel behavior.
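For example, a hypothetical mutator that trims whitespace from the name before it is stored:

```php
class Todolist extends Model
{
    // Called automatically when doing $list->name = '...';
    public function setNameAttribute($value)
    {
        $this->attributes['name'] = trim($value);
    }
}
```

Note how the mutator itself is responsible for writing to $this->attributes.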

If no setter method is defined, Laravel checks whether the property name is listed in the class’ $dates array, or if it should be cast to a Date or DateTime object, according to the class’ $casts property. If Laravel determines that the value is a datetime type, it will convert the value into a time string, to make sure it is safe to save the value to the database.

The last check determines whether the value should be stored as JSON, in which case it is encoded as a JSON string, to make sure it’s ready to be saved to the database.

As the last thing, the method saves the value, which may or may not have been cast to a different type, into the object’s $attributes property. This is the array where the object’s current state is kept.

Getting object properties

Now that we have an idea what’s happening when setting properties, let’s look into getting the data back out.
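Assuming the list we created earlier, reading a property is as simple as:

```php
echo $list->name;
```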

This produces the value we set earlier, ‘My new list’, as output.

Just like __set($name, $value) is PHP’s fallback when trying to set a property that doesn’t exist, PHP has a magic get method, called __get($name). This method is called when trying to read the value of a property that hasn’t been defined.

Again, our Todolist class doesn’t have a $name property. Trying to read it will call PHP’s __get() method, which again is defined on the base Eloquent Model class.

Illuminate\Database\Eloquent\Model::__get() itself is just a wrapper for the class’ getAttribute() method.
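Like __set(), the getter side is a thin wrapper:

```php
public function __get($key)
{
    return $this->getAttribute($key);
}
```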

The last part of the method relates to Eloquent relationships. I might go into this in another post, but I will skip it for this post.

The interesting part of the method is a check for whether the property name exists in the object’s $attributes property. This is where you’ll find it if it’s been set with the default Eloquent property setter, or if it has been loaded from a database.

If the property doesn’t exist in the $attributes array, a check is made to see if a get mutator exists. Like Laravel setters are of the form set[Property]Attribute(), getters are of the form get[Property]Attribute(). I.e. when trying to read our $name property, Laravel will check for the existence of a method called getNameAttribute().

The getAttributeValue() method works like a reverse version of the setAttribute() method discussed earlier, its main purpose being to get a stored value, cast it to something useful, and return it. Let’s go through it step by step.
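Roughly, again as of Laravel 5.2:

```php
protected function getAttributeValue($key)
{
    // Fetch the raw value from the $attributes array, if it is there.
    $value = $this->getAttributeFromArray($key);

    // A get mutator (get[Property]Attribute) takes precedence.
    if ($this->hasGetMutator($key)) {
        return $this->mutateAttribute($key, $value);
    }

    // Properties listed in $casts are cast to the specified type.
    if ($this->hasCast($key)) {
        return $this->castAttribute($key, $value);
    }

    // Properties listed in $dates become DateTime (Carbon) objects.
    if (in_array($key, $this->getDates()) && ! is_null($value)) {
        return $this->asDateTime($value);
    }

    // Otherwise the raw value is returned untouched.
    return $value;
}
```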

The first thing that happens is that the property’s value is fetched, if the property exists in the $attributes array.

If the class has a property mutator, the property value is run through it, and returned.

If the property name is specified in the $casts array, the property’s value is cast to the specified type and returned.

If the property is listed as a date property in the $dates array, the value is converted to a DateTime object, and returned.

And lastly, if the property shouldn’t be changed in any way, the raw value is returned.


Eloquent uses PHP’s magic __get($name) and __set($name, $value) methods to save and get data on model objects. During this process it provides a couple of ways to manipulate the data.

So far we’ve identified 3 ways to manipulate property values set on and read from Eloquent model objects.

  • Accessor and mutator methods
  • The $casts array
  • The $dates array

Accessor and mutator methods

The most flexible way to manipulate Eloquent data on getting and setting is using accessor and mutator methods. These are named in the form get[Property]Attribute() and set[Property]Attribute().
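A hypothetical accessor/mutator pair for our name property could look like this:

```php
class Todolist extends Model
{
    // Mutator: runs when setting $list->name.
    public function setNameAttribute($value)
    {
        $this->attributes['name'] = strtolower($value);
    }

    // Accessor: runs when reading $list->name.
    public function getNameAttribute($value)
    {
        return ucfirst($value);
    }
}
```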


The $casts array

The casts array is an array property where casts are specified for object properties. If no accessor or mutator is defined for a property, and it is specified in the $casts array, Eloquent will handle casting the value.
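For example (is_done and settings are hypothetical columns):

```php
class Todolist extends Model
{
    protected $casts = [
        'is_done'  => 'boolean', // "1"/"0" from the database becomes true/false
        'settings' => 'array',   // a JSON string becomes a PHP array
    ];
}
```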


The $dates array

Since it’s very common to work with dates and times, Eloquent provides a very easy way to specify which properties should be cast as date objects.
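For example, with a hypothetical completed_at column:

```php
class Todolist extends Model
{
    protected $dates = ['completed_at'];
}
```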


By default, the properties will be cast to Carbon objects when getting the property.

Common pitfalls

I’ve seen a couple of common errors developers make when working with Eloquent getters and setters, that cause issues.

  • Defining the object properties
  • Forgetting to set $attributes

Defining the object properties

Many OO programmers prefer to define their object properties in their class files, both to make it instantly visible which properties are available on class objects, and to allow PHP to make various optimizations. But since Laravel’s Eloquent ORM relies on the magic PHP getter and setter methods, defining the class properties will make Eloquent unable to mutate the data, as well as prevent the data from being set in the $attributes array, which in turn prevents it from being saved to the database.
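For example, assuming our Todolist model:

```php
class Todolist extends Model
{
    // Because $name is now an accessible property, PHP never calls
    // __set(), and the value bypasses Eloquent's $attributes array.
    public $name;
}
```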

Defining the object property like this prevents Eloquent from saving the name attribute to the database.

In this example the ‘name’ key doesn’t exist in the $attributes array, hence it doesn’t exist in the database.

Forgetting to set $attributes

Another common pitfall is to override a mutator method to manipulate the property value, but forgetting to add the data to the $attributes array.
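For example:

```php
public function setNameAttribute($value)
{
    // The value is manipulated, but never written to $this->attributes,
    // so it is silently lost.
    $name = trim($value);
}
```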

In this example the value will never be saved to the database, and cannot be read using an accessor.

The example will echo an empty string.


Front Controller design pattern

A common design pattern used by PHP frameworks is the front controller. The basic idea is simply that your application has a single entry point, which distributes all requests to the proper parts of the system. In PHP the front controller pattern usually consists of an index.php file.

Front controller advantages

The front controller pattern provides a single place for handling an application’s bootstrap process. This helps keep the bootstrap process consistent no matter which part of the system a user requests.

The front controller’s main job is to distribute the request further into the system, this can either be by dispatching a command to update the model directly, or by passing the request to a controller based on a defined route that matches the request.
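A minimal sketch of the idea, with made-up routes:

```php
<?php
// index.php — a minimal front controller. Every request enters here,
// and the route table decides which handler runs.

function dispatch($uri, array $routes)
{
    $path = parse_url($uri, PHP_URL_PATH);

    // Shared bootstrapping (config, autoloading, access checks) would
    // happen here, once, for every page in the application.

    if (isset($routes[$path])) {
        return $routes[$path]();
    }

    return '404 Not Found';
}

$routes = [
    '/'          => function () { return 'Frontpage'; },
    '/portfolio' => function () { return 'Portfolio'; },
    '/contact'   => function () { return 'Contact form'; },
];

$response = dispatch('/portfolio', $routes);
```

A real framework would match patterns rather than exact paths, but the principle is the same: one entry point, one bootstrap, then dispatch.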

Front controller alternatives

A lot of older PHP guides and books recommend having an entry point for each part of the application. For simple websites this could be a way to separate the concerns of each page into its own file. A basic portfolio page could consist of 3 files:

  • index.php – The frontpage
  • portfolio.php – The actual portfolio, with example projects etc.
  • contact.php – A page with contact info, and a contact form.

In this example all functionality relating to completed projects would be in portfolio.php. Similarly, all concerns regarding the contact form would be kept in the contact.php.
This approach has its strength in its simplicity and performance. Any libraries included to send email from the contact form will only be imported on the contact page. Thus it will not slow down the other pages.

The problem with this is that it doesn’t scale well. If you’re building a mildly complex application there will be a lot of redundancy in the bootstrapping process for each page. E.g. if the page has a user system, all parts of the user system will have to be included in every file to do access checks. At this point it makes sense to separate bootstrapping into a dedicated process that is included in the pages requiring it, which in turn means that the previous benefits start to become moot.



PHP 7 has finally arrived

After about 2 years of work, and a few postponements, PHP 7 has finally been released. I’ve previously written at length about PHP 7’s new features and enhancements, but the short version is:

  • Improved performance: PHP 7 is up to twice as fast as PHP 5.6
  • Significantly reduced memory usage
  • Many fatal errors converted to Exceptions
  • Secure random number generator
  • Removed old and unsupported SAPIs and extensions
  • Return and Scalar Type Declarations

What about the major projects, are they ready?

Drupal 8 was recently released, and the core should have good enough PHP 7 support for most people, so you can start playing around with that if you are so inclined.

WordPress has been trying to catch up as well, but I’m not too certain about their current status. I would expect the WordPress core to be running on PHP 7 very soon, if it doesn’t already, but I could imagine a lot of plugins having issues.

All maintained Symfony branches have had their test suites passing for a few months, so if you’re a Symfony developer your main concern should be your own additions.

The Laravel Homestead Vagrant box has also supported PHP 7 for quite a while, so I would also expect Laravel core to be running fine on the new version, even though I haven’t found anything official about it.

Trying out PHP 7

If you want to try out PHP 7 before you run out and upgrade all of your live servers (this is a good idea™), there are a few ways to do that.

You could of course just download it and install it from source.

If you’re running Debian, PHP 7 is already available on the Dotdeb repositories.

As I mentioned, it is also available on Homestead if you’re developing with Laravel using Vagrant. Alternatively, Rasmus Lerdorf’s php7dev Vagrant box also supports a range of PHP versions, including PHP 7.

The puphpet Vagrant box builder is sadly still using the PHP 7 nightlies, but I’m hoping that they will upgrade to the final release soon.


New WordPress plugin: URL Builder for Analytics

tl;dr: I’ve built a WordPress plugin for creating Google Analytics tracking URLs for your posts, straight in the post editor. Development of the plugin happens on the GitHub repository

Website Analytics in short

When doing website analytics, it can be really helpful to know which channels site visitors come from. If you use Google Analytics this can be specified easily, as a couple of GET parameters in the site URL. The main issue here is that remembering the names of the parameters can be bothersome, and writing them manually is error prone. That’s why Google supplies a URL builder tool, where you just fill out the fields and it’ll give you a URL with the tracking parameters.

URL builder for Analytics

Since I was creating a bunch of tracking URLs for WordPress posts, I decided it would be helpful to just do it all straight from the WordPress interface. I couldn’t find a proper plugin for it, so I decided to build my own: the URL builder for Analytics.

The plugin is pretty simple. It adds a meta box to the edit page of all post types. The meta box has 2 tabs: social sharing and custom sharing.

Social Sharing

The Social Sharing tab

Most of my URL sharing happened on various social channels, so to streamline that I made the social sharing tab, heavily inspired by this linktagger tool (thanks to Søren Sprogøe for showing me that). Using social sharing you just fill out your campaign name, and the plugin will give you links for various social media sites (Facebook, Twitter, Google+ and LinkedIn). The created links will have their source set to the social site’s name, and the medium will be “social”. This ensures consistency for all posts shared to social media.

Custom sharing

The Custom Sharing tab

The other tab allows you to specify your own tracking values for all five of the parameters Google Analytics allows for its tracking URLs. The plugin will then give you a full tracking URL ready to cut and paste into your newsletter, or wherever you’d want to use it.

Development and Next step

The development of the plugin is done on a GitHub repository, which also hosts the current issues and feature requests queue.

The next thing I’d like to add is URL shortener support. This would include doing OAuth authentication with the service, allowing the use of the URL shortening service to shorten the URLs and save them to your account.

The plugin can be downloaded from the WordPress repository, and bug reports and feature requests are very welcome both in the issues queue, or in the comments below.


Levemand – on food, beer, and other good things in life

For some months now I’ve been playing with a side project I call Levemand. The name was really just a working title so I could get started, and it was meant to be replaced once I came up with something better, but by now it has stuck, so it will probably stay.

The Levemand blog is about enjoyment and the good life. It focuses on things I like, such as good food, but especially good beer. The blog is meant as a place where I can gather my thoughts on various subjects I’m interested in, but where I haven’t previously had a proper place to collect my mental notes into coherent thoughts. At the same time it’s meant as a playground where I can experiment with various online-related technologies I’ve otherwise only had the chance to read about. Some of those experiments will probably be covered here on this blog over time.

So far Levemand has mainly been about beer. It currently consists of four main parts:

  • Beer styles, where I collect my research and notes into proper descriptions of some of the many different beer styles out there.
  • Breweries, where I write about breweries I think are doing exciting things, and breweries I believe are worth keeping an eye on.
  • Beer reviews, covering individual beers.
  • Events, the newest section. It will cover interesting beer, food and alcohol-related events such as festivals, tastings and the like.

Together with a friend, I’ve started visiting a number of Copenhagen’s interesting smørrebrød restaurants, which has resulted in a couple of restaurant reviews. I’m also planning to start writing about some of the many exciting beer bars during the coming year. There are plenty of exciting beer bars around the country, but my focus will probably be on the ones in Copenhagen, since those are the ones I frequent and therefore know best.


PHP 7 – What’s up, and what’s new?

Update Nov. 9 – Looks like the PHP 7 release date has been postponed due to some outstanding bugs and regressions. It should still be right around the corner, though.

PHP 7 overview

The next major version of PHP, PHP 7, should be right around the corner, with a planned release date of November 12th, according to the current release timetable. PHP 7 will include a lot of performance improvements, and a range of new features. In this post I’ll try to go through the major new stuff.

But, what about PHP 6?

You probably know that the current version is PHP 5.6, and if the next version is PHP 7, obviously PHP 6 was skipped. I won’t go into details with this choice since I’d rather just forget about it.

But in short, first politics happened, then bullshit happened, and then finally they had a vote and moved on.


Since PHP 7 is a major version release, backwards compatibility breaks are allowed. BC breaks are never nice, so the dev team is trying to keep them at a minimum, but some BC breaks should be expected.

Deprecation of PHP 4 style constructors

PHP 4 introduced object oriented programming to PHP. Back in those days an object constructor was a method named like the class you wanted to create an object from.

In PHP 5 the class name constructor was replaced with the new magic method __construct() as the recommended way to create objects.

PHP 4 style constructors are still available in PHP 5, unless you’re using namespaces, in which case they are ignored and only the PHP 5 style constructors are available. This inconsistency can cause some confusion, and it is the main reason that PHP 4 style constructors will be marked as deprecated in PHP 7 and removed in the following PHP 8. This means that in PHP 7 a class with a PHP 4 style constructor, but no PHP 5 style constructor, will cause PHP to emit an E_DEPRECATED error.
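For illustration (the class names are made up):

```php
// PHP 4 style: the constructor is a method named after the class.
class OldTimer
{
    function OldTimer() // emits E_DEPRECATED when PHP 7 parses the class
    {
        // ...
    }
}

// PHP 5 style: the __construct() magic method.
class Modern
{
    function __construct()
    {
        // ...
    }
}
```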

Removal of SAPIs and extensions

PHP 7 also removes a bunch of SAPIs (Server APIs – PHP interfaces for module developers to extend the webserver) and extensions. I’ll skip the SAPIs, as I doubt those changes will affect me.

Two modules have been removed. The ext/ereg regex module which has been deprecated since PHP 5.3, and the ext/mysql extension, which has been deprecated since PHP 5.5. If you are using the ereg functionality, it is recommended that you switch to using pcre, and if you are still using the mysql functions, you should switch over to using either MySQLi or PDO, more info for choosing which of the two to use can be found in the manual.

Removal of other functionality

Besides the removed SAPIs and extensions, a bunch of other functionality will be removed, most notably probably being some mcrypt, iconv and mbstring functionality. For a full list, refer to the rfc.

PHP 7 Performance

Internally the Zend Engine has been completely refactored and rebuilt, a project known as PHPNG (next generation). This rewrite allows for a lot of improvements, most of which go way over my head (I’m a web developer, not a language developer), but you can find a lot of details on the wiki.

Some of the notable improvements includes improved hash tables and improvements to the internal representation of values and objects, which Nikita Popov has written some interesting articles about: part 1 and part 2.

For PHP developers, the most notable improvements will probably be the drastically improved performance, that Rasmus Lerdorf has been talking about.

Engine Exceptions

Until now, when something went really wrong in a PHP script, it would cause either an error or an exception. An error was an internal PHP thing that would either kill the script, or at least output an error message, either to the user or, hopefully, just to a log file, depending on your setup. An exception was a part of the script; it would stop the execution and bubble up through the call stack until it hit something designed to catch it, or kill the script if nothing caught it.

An exception changes the script’s execution flow, but can be mitigated in a properly designed application, while errors were a bit harder to handle at runtime.
In PHP 7 the exception hierarchy has been extended, and the error types that would previously stop the entire script execution are now catchable, so the developer is able to handle them more gracefully.
To keep backwards compatibility, all exceptions still extend the Exception class, while the errors extend a new Error class. To ensure both errors and exceptions can be handled together, they both implement a new Throwable interface, making catch-all solutions possible.
You can learn more about the new engine exceptions in this rfc, and about the new Throwable interface in this rfc.
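A minimal sketch of what this enables (the thrown class names are from the RFCs; the undefined function name is my own):

```php
<?php
// Calling an undefined function is now catchable as an Error
// instead of being a fatal error.
try {
    thisFunctionDoesNotExist();
} catch (Error $e) {
    echo 'Caught ' . get_class($e) . PHP_EOL;
}

// Both Error and Exception implement Throwable, so a
// catch-all handler is possible:
try {
    throw new RuntimeException('something went wrong');
} catch (Throwable $t) {
    echo 'Caught: ' . $t->getMessage() . PHP_EOL;
}
```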

Scalar type hints

PHP 5 introduced type hinting, the notion of allowing methods to expect their parameters to be of a certain type, and to throw a recoverable fatal error if the input didn’t match the type hint. This was possible for objects and arrays.
In PHP 7 it is also possible to type hint scalar types, i.e. integers, floats, booleans and strings. If an input parameter doesn’t match the hinted type, the default behaviour is to coerce the value to the hinted type, but it’s possible to turn on strict mode, in which case a TypeError exception is thrown if an input parameter has the wrong type. Strict mode has to be enabled in each file where it should apply.
I’m really looking forward to this change, and will probably be writing a lot of strict files. I’ll probably get more into this in a later post.
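A small sketch of strict mode (the function name is my own):

```php
<?php
declare(strict_types=1); // must be the very first statement in the file

function addInts(int $a, int $b): int
{
    return $a + $b;
}

echo addInts(2, 3) . PHP_EOL; // 5

try {
    addInts('2', 3); // a numeric string is rejected in strict mode
} catch (TypeError $e) {
    echo 'TypeError: ' . $e->getMessage() . PHP_EOL;
}
```

Without the declare statement, the string '2' would simply have been coerced to the integer 2.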

Return type hints

Another new thing that I think will be really awesome is return type hints. In the same way that a method can declare which parameter types it expects as input, it can now also make a promise about which data type it will return. Declared return types are included in PHP 7, and an rfc extending the concept with void return types is currently being voted on.
Another cool extension would be nullable types; the rfc is currently only a draft, but I’m hoping to see this in PHP 7.1.
Return types are another thing I’m really looking forward to, and will probably write more about later.
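A quick sketch of a declared return type (the function name is my own):

```php
<?php
// The function promises to return a string; in weak mode a
// non-string return value would be coerced, in strict mode it
// would throw a TypeError.
function label(int $id): string
{
    return 'item-' . $id;
}

echo label(7) . PHP_EOL; // item-7
```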

Null coalesce

Null coalesce is a variant of the ternary operator, aka the shorthand if. The new operator ?? also checks whether a value is null.

In human terms this means: $category gets the value of $_GET['category'] if it is set, otherwise it gets 'cakes'.
This could, of course, also be done using the existing ternary operator
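A sketch of both forms, using the category example from above:

```php
<?php
// PHP 7 null coalesce: use $_GET['category'] if it is set and
// non-null, otherwise fall back to 'cakes'.
$category = $_GET['category'] ?? 'cakes';

// The pre-PHP 7 equivalent using the existing ternary operator:
$category = isset($_GET['category']) ? $_GET['category'] : 'cakes';
```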

This is not a revolutionary addition, but rather some nice syntactic sugar.

Anonymous classes

PHP 5.3 introduced anonymous functions, which are unnamed functions defined at runtime. In PHP 7 the concept is expanded with anonymous classes. I believe their main purpose is single-use objects where stdClass might not be enough, but I’ll have to work with them for real before I figure out their true purpose.
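A sketch of a single-use implementation defined inline (the Logger interface is my own example):

```php
<?php
interface Logger
{
    public function log(string $message);
}

// An anonymous class: defined, instantiated and used in one go,
// without cluttering the codebase with a named one-off class.
$logger = new class implements Logger {
    public function log(string $message)
    {
        echo $message . PHP_EOL;
    }
};

$logger->log('Hello from an anonymous class');
```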

Group use

PHP 5.3 also introduced namespaces, logical separation of classes into groups.
To use classes from other namespaces you can either use their fully qualified name:

or import the required class from its namespace

PHP 7 introduces grouped use declarations, so it’s possible to include multiple classes from a namespace in a single use statement. This is done by wrapping the class names in curly braces. It’s still possible to alias individual imports.
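A sketch of the different styles, with made-up App\Models class names (use statements don’t require the classes to exist until they are actually instantiated):

```php
<?php
// 1. Fully qualified name, no use statement needed:
//    $user = new \App\Models\User();

// 2. One use statement per class:
use App\Models\User;
use App\Models\Post;

// 3. PHP 7 group use, including an individual alias:
use App\Models\{Comment, Tag as Label};

var_dump(User::class);  // string(15) "App\Models\User"
var_dump(Label::class); // string(14) "App\Models\Tag"
```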

I’ll let you decide whether this is nice syntactic sugar, or another way to build an unreadable mess, since I haven’t fully decided yet.

Uniform variable syntax

The uniform variable syntax tries to make variable syntax more consistent. I don’t have a full overview over all of the changes yet, but the RFC contains a lot of examples. This is a pretty important change to be aware of, since it might actually change the behaviour of, or just break, some of your running code since it changes how a lot of expressions are evaluated.
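One example of the kind of change the RFC describes, with parentheses added to make the PHP 7 reading explicit:

```php
<?php
$foo = 'bar';
$bar = ['baz' => 'qux'];

// PHP 5 evaluated $$foo['baz'] right-to-left, as ${$foo['baz']};
// PHP 7 evaluates it strictly left-to-right, as ($$foo)['baz'].
echo ($$foo)['baz'] . PHP_EOL; // qux
```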


Secure random numbers

PHP has always had functionality to generate pseudo-random data; the standard functionality just hasn’t been cryptographically secure (i.e. not random enough for security purposes). PHP 7 introduces new functionality for creating data that is properly random.
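A sketch of the two new functions, random_bytes() and random_int():

```php
<?php
// 16 bytes of cryptographically secure random data,
// hex-encoded into a 32 character token:
$token = bin2hex(random_bytes(16));

// A cryptographically secure integer between 1 and 6:
$roll = random_int(1, 6);

echo $token . PHP_EOL;
echo $roll . PHP_EOL;
```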

Further reading about PHP 7

In this post I’ve introduced a bunch of the major changes in PHP 7 that I believe will have the most impact on my daily life as a developer. There is a lot more being added and removed, and a full list is available here.

Do you have any change that you look forward to more than the rest of PHP 7? Or do you see anything coming up in one of the following versions that makes everything even cooler? Leave your answer in a comment.

Leave a Comment

Code reviews


I’ve recently taken part in introducing code reviews at my current workplace, so I’d like to spend some time here telling a bit about why I believe code reviews are a good thing, what I believe code reviews can do for the code and for the development team, how we’re currently doing code reviews, and what I believe a good code reviewer is looking for.

The aim of this post is not to be a checklist of how or what to do at code reviews, but rather to provide a broader perspective on some of the benefits of code reviews, and some broader points about what to look for.

Purpose of the code reviews

At first some people might get the feeling that they are being supervised, or that they aren’t trusted. The point of code reviews isn’t to point out how bad a developer you are, or to point out your flaws, it actually has very little to do with you as a person at all. A code review has two main purposes:

  1. To improve the overall quality of the software
  2. To improve the overall quality of the developers

The first point should be obvious. Better code means fewer bugs, which means less maintenance. This translates directly into better value for the customer, and more time for us as developers to build new exciting things instead of maintaining old boring things.

The second point has two sides to it. First, there is a lot to learn from reading other people’s code. They might do stuff in a way that you never thought about, using libraries you didn’t know about. Even on the off chance that you have to do reviews for somebody you could never learn anything from at all, there is still a lot to be learned from condensing your current knowledge into useful actionable feedback.

Before a code review

It is important to prioritise code reviews, as having pull requests lying around waiting for review will both block the developer and hold back the project. At the same time, a code review requires you to disturb one of your colleagues, so please respect their time; this means preparing properly, so you take as little of your colleague’s time away from their own project as possible.

Be sure to read through your pull request again and make sure it is actually ready for review. Make sure everything is documented as expected, and that any debugging info is removed, so your colleague doesn’t have to waste time rejecting your pull request because of something obvious like that.

Small pull requests are easier to grasp and review, so be sure to only include things that actually need review in your pull request.

What needs to be reviewed

What needs to be reviewed is up to your organisation and project; the rule at our office is that if you wrote it, it needs to be reviewed. This might seem strict, but it saves everybody the trouble of thinking “is this actually a large enough change to require a code review?” since the answer is always yes. This saves us the trouble of having small bugs go into production because “it was just a minor change”.

Exception: hotfixes

If a critical bug has been found which requires an immediate hotfix, we obviously won’t be able to wait for a code review. In this case the hotfix will be made and deployed to solve the issue, the hotfix will then be code reviewed afterwards and fixed to live up to normal coding standards. This can be done in a more calm and considered fashion since the immediate problem has been fixed.

What doesn’t need to be reviewed

Since the rule is “if you wrote it, it needs a review” it goes that if you didn’t write it, it doesn’t need a review.
We do a lot of sites in Drupal, which makes it possible to deploy settings and configuration using a module called Features. Features creates a PHP version of your configuration; since this is auto-generated, we do not bother code reviewing it.

The same goes for community contributed code, like Composer packages and Drupal modules, since these are expected to be tested and vetted elsewhere, and it is expected that the developer uses their critical sense before including them in a project.

The review

A code review is very similar to any other kind of text review, and should happen on 4 different levels using a top-down approach starting with the architectural thoughts and ending with the actual lines of code.

  • Architectural level: A code review should start at the architectural level, which is the bird’s eye view of the code. Does the structure make sense? Is the code modular? Testable? Performant? Maintainable? Do the chosen design patterns make sense for the problem at hand? Could the code be made more modular or maintainable by using more abstractions? Fewer abstractions? Are the concerns properly separated?
  • Functional level: Jumping down a level we look at the individual functions. Does every function serve one, and only one, clear purpose? Is it obvious what that purpose is from the function name? Is the documentation descriptive? Do functions duplicate functionality from each other, or from built-in functions? Is the function itself readable and understandable?
  • Line level: Does every line have a clear purpose? Are the lines compressed enough to allow you to keep an overview of the methods? Are they compressed to the point of unreadability? Are inline comments included? Are they necessary, or should they be replaced by properly named methods/functions/variables?
  • Word level: At the lowest level we’re looking at the words. Naming is known to be one of the hardest disciplines in computer science, so make sure the names make sense now, and in 6 months when you come back to maintain the code. Do method names clearly describe what a method does, what it needs and what its output should be? Do variables and properties tell you what might be in them?

Post code reviews

After the code review, the pull request is sent back to the developer to fix any issues and it will likely go back and forth a few times with the developer asking questions, fixing issues and questioning feedback.

It’s important to note that the reviewer’s word isn’t set in stone, but should be considered feedback and suggestions. Everything is up for discussion, especially because there is no one right way to structure code. Again, the entire point of code reviews is to improve the quality and maintainability of the software, and to learn from each other. Different developers have different strengths and skills, and bringing these together will improve the software, the people developing it and the team.

Further reading

  • Kevin London’s Code review best practices – A more in-depth view of the actual code review, and suggestions on what to look for.
  • Smartbear 11 best practices for peer code review – Real life experiences condensed into a set of best practices. Many of them can be directly put into use, like the max size of a code review and how to articulate feedback.
Leave a Comment


What is Composer

Composer is a dependency manager for PHP. It makes it easy to include 3rd party libraries in your PHP project. It’s inspired by npm and aims to bring some of the same ideas to PHP while fixing some of the deficiencies.

But, what about PEAR

It is true. We have had dependency management using PEAR for quite a while. Composer brings some advantages and modernizations to the table:


PEAR:

  • Requires libraries to follow a specific structure.
  • Installs libraries globally, not per project. This requires all libraries to be installed globally on all servers, and causes trouble with different versions of a library on servers running multiple services.

Composer:

  • De-facto standard (everybody uses it).
  • Installs per project, or globally, depending on library and needs.
  • Better version handling when following semver. Automatic handling of allowed versions.
  • Lazy-loading autoloader.
  • Even handles PEAR packages if required.

Installing Composer

Installing Composer is easy; go to getcomposer.org for the latest release.
Just download and run the installer:
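A sketch of the quick way, assuming getcomposer.org is reachable (see the installer page for the current, checksum-verified instructions):

```shell
# Download and run the installer, which creates composer.phar
# in the current directory:
curl -sS https://getcomposer.org/installer | php
```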

Then move the created composer.phar file to a bin dir, or any directory in your $PATH.

Composer basics

Using Composer in your project only requires 2 files: composer.json and composer.lock.
composer.json is a JSON formatted file which describes your project. It contains things like your project’s dependencies and how to load your own files. An example:
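A minimal sketch of such a file, using the monolog/monolog 1.12.0 dependency and App namespace autoloading described below:

```json
{
    "require": {
        "monolog/monolog": "1.12.0"
    },
    "autoload": {
        "psr-4": {
            "App\\": "src/"
        }
    }
}
```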

This file tells composer 2 things; first is that the project is dependent on the project called monolog/monolog, in the specific version 1.12.0, second that everything in the app namespace should be loaded from the src directory, using a psr-4 autoloader.

To install your dependencies run
composer.phar install
This will make Composer look up the package monolog/monolog in its default repository, packagist, and try to install the required version. It will then install any requirements the monolog/monolog package might have, as well as any requirements of those packages, and so on. It then sets up the required autoloaders, making the use of the installed libraries a charm. The last thing it does is create a composer.lock file, which lists all the installed packages along with the exact version that has been installed.

Now including your required packages is as easy as including Composer’s autoloader:
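A sketch, assuming the vendor directory created by composer install sits next to the script:

```php
<?php
// vendor/autoload.php is generated by Composer and registers
// autoloaders for all installed packages, plus whatever is
// declared in your own autoload section.
require __DIR__ . '/vendor/autoload.php';

// Classes from installed packages can now be used directly:
$log = new Monolog\Logger('app');
```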

Composer best practices

JSON has a very strict syntax, and editing it by hand is error prone and not recommended. That’s where the composer cli tool saves the day.

Instead of installing new packages by adding them to the composer.json manually, it is recommended to install them using:
composer.phar require {vendor/package[:version]}
The version number is optional; if it isn’t specified, Composer will find the newest version of the library and create a dependency on that version. Unless you require a specific version of a library, omitting the version number is the recommended practice. E.g. to install the newest version of monolog/monolog, run:
composer.phar require monolog/monolog
At the time of writing, the newest version of monolog/monolog is version 1.16.0, so the above will add the require clause:
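Something along these lines in composer.json:

```json
"require": {
    "monolog/monolog": "~1.16"
}
```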

When checking versions, "~1.16" is equivalent to ">=1.16 <2.0.0", but more on specifying versions later.

Daily usage

Daily usage of composer mainly consists of 3 commands; composer require as explained above, composer install and composer update.

composer install

composer install is the most used of the commands.
The command first checks if a composer.lock file exists. If it does, it will install the exact versions specified in the .lock file; if no .lock file exists, it will install the newest versions of the libraries allowed by the package requirements. It then creates its required autoloaders. If the composer.lock file did not exist, it is then created; if new packages have been added that weren’t in the previous .lock file, it is updated.

composer update

composer update basically does the same as composer install, except that it ignores any existing composer.lock file and just installs the newest allowed versions. It then creates autoloaders and writes a new composer.lock file.

Bulk-updating all of a project’s dependencies is bound to cause a world of pain, so it’s recommended to update one dependency at a time and then run your test suite, to make sure nothing breaks in the update process.

Version control

composer.lock specifies the exact installed versions of all dependencies. This file should be committed to your version control system to make deployment faster and to guarantee that the tested versions are installed. The vendor dir is where the actual dependencies are kept. This is a bunch of code that is already hosted in version control systems somewhere else, and it should not be included in your own version control system.
When deploying your project, running composer install will install the exact versions the project has been tested with, using the composer.lock file.


Specifying versions

Composer allows for a bunch of different ways to specify the allowed versions of each package, by specifying ranges, wildcards etc.

Most of the specifiers are built to support projects using SemVer as a promise of predictability in version numbers, so if you are a library maintainer and not using semver, please do, to make it easier for people to use your library.


Comparison operators

>, <, >=, <=, !=, || and , (comma or space, logical AND)

Wildcard

I.e. "2.3.*" == ">=2.3.0 <2.4"

Tilde operator

Recommended usage! (For libraries following semver.)
Specifies a minimum version, and allows updates to the next minor or major version, depending on specificity.
I.e. "~3.6" == ">=3.6 <4.0", "~3.6.0" == ">=3.6.0 <3.7.0"

Advanced usage

So far I’ve gone through the basic usage of Composer, but it can actually do quite a lot more. In this last section I’ll do a quick run-through of some nice-to-know features, but I won’t be going through everything that is possible with the program.

Neat cli options

--verbose – Prints more information when running commands.
--profile – Provides profiling information.
--dry-run – Pretends to do the install/update, but doesn’t actually touch any files. Useful for getting an idea of what would be changed.

Composer repositories

By default Composer installs from packagist, but it is possible to install from other sources like GitHub; SitePoint has a nice tutorial on that.

Cli commands

composer.phar show -i – Shows all installed libraries and their versions.

composer.phar create-project namespace/project
Clones the specified project and runs composer install in the install directory.

If in doubt: composer help

Further study

Some other nice features that I might touch upon in future blog posts include:

  • Creating your own composer-ready packages
  • Scripts – allows running scripts during various parts of the installation process


Some nice references to know about
The Composer Homepage
Official documentation
Interactive composer cheat-sheet
Composer on

Leave a Comment

2014 and beyond

After having summed up 2013, it is time to look at 2014, which is already well underway. Inspired by the experience from last year’s goals, this year I’ve chosen to approach my goal setting a bit differently. I have set two goals for the year, one personal and one professional; this post will focus on the professional one.

I have long had an ambition of becoming self-employed and making a living from my own business (or businesses). This year’s goal is therefore to start earning money from my own business. The goal is not to go self-employed full time this year, but simply to spend part of my spare time building a foundation that will hopefully make that possible over a number of years.

I have a couple of projects in the pipeline that will hopefully start generating an income during the year, and I have slowly started finding some potential partners and prospective customers. So it’s about finding a way to provide value to them, so that I can hopefully turn some of them into sources of income.

The goal is not nearly as sharply defined as last year’s goals, but experience says that it works better to set the more concrete goals within a shorter time horizon, since that makes it easier to adjust them continually as reality changes.

Leave a Comment