

Summer is on, June is over, and it’s time for yet another summary of a month passed. This month’s focus will be on PHP and Laravel, with an emphasis on performance, but many of the tricks are also useful if you work with other technologies. There’s also a gem about database encryption.

I’ve had a hard time finding a plugin that provides proper syntax highlighting and folding when working with JavaScript, but Vim-vue finally seems to meet most of my needs when working with Vue components!

The entire frontend was refactored, which provided a lot of useful insights that were documented in a post on refactoring the frontend. The process also formed the basis for the new Symfony frontend component, Webpack Encore, a Webpack wrapper and asset manager which will be introduced in a coming version of Symfony.

StyleCI founder Graham Campbell has an interesting post on the architecture of a Laravel package. Besides the Laravel specifics it also contains some interesting points on advanced composer usage.

Performance is always an interesting topic. Chris Fidao from the awesome Servers for Hackers released a new section dedicated to Laravel Performance. The focus is on low-hanging fruit, and I believe most people will find something they can do right now to improve the performance of their application. Even though the emphasis is on Laravel, most of the tips, such as object caching and various database optimisations, can be used in any PHP project, or indeed any project where a database is in use.

Olav van Schie also has a performance focus in his Make your Laravel app fly with PHP OPCache. Again, even though the title says Laravel, OPCache optimisations can improve performance on any PHP project, and the article goes a bit deeper into how to tune your OPCache settings.

The last article of the month focuses on security. Scott Arciszewski from the Paragon Initiative has a very interesting article about Building searchable encrypted databases. He talks about implementations of database encryption, both good and bad, and about how to set up your database so you can search your encrypted data in a performant way without lowering your security.



April is almost over, and it’s time for another monthly roundup of interesting articles and links.

During the month I’ve read some interesting articles covering a pretty good spread, ranging from introductions to JavaScript tools, through best practices for working developers, to a deep dive into the PHP engine.

The world of JavaScript tooling and frameworks seems to be an ever-moving target; even so, the purpose of these tools is to make the everyday life of developers easier. npm seems to be the current de facto standard for JavaScript package management. Sitepoint posted a beginner’s guide to npm, which gives a bit more background for people who want to know more than just how to write npm install.

At our company we recently switched to the Webpack module bundler. It’s been working quite well, but it was still interesting to read Tutorialzine’s learn Webpack in 15 minutes and gain a few more insights into how it all works.

JavaScript as a language is also a moving target. The latest version is ECMAScript 2015 (aka ES2015, aka ES6); if you want to know what’s new in this version, you can Learn ES2015 with Babeljs.

On a more general development note, TechBeacon published an article called 35 bad programming habits that make your code smell. Despite the link-baitish title, the article contains some good points about bad programming habits worth keeping in mind during development.

Developers often complain about having to maintain other people’s code, and sometimes you get the impression that only greenfield projects are fun to work on. Tobias Schlitt from Qafoo has some very interesting points about the advantages of improving existing code bases in his Loving Legacy Code. I think the article presents some really interesting advantages that are often forgotten when we get too focused on the code itself.

I’ve written a few deployment scripts myself, and it’s always interesting to learn about other people’s experiences. Tim MacDonald has some interesting points in his Writing a Zero Downtime Deployment Script article.

It’s always interesting to know how our tools actually work internally. Even though it’s a bit too low-level for me, I always find Nikita Popov‘s PHP internals articles interesting, and the same goes for the new deep dive into the PHP 7 Virtual Machine.

As a last thing I’d like to share a PHP library that I’d like to play around with: Spatie’s Crawler, a PHP library for crawling websites. I imagine it would work well together with Fabien Potencier’s Goutte web scraper library. I currently use Goutte for a small “secret” side project I call Taplist, a web scraper that scrapes the websites of beer bars in Copenhagen to collect the lists of what’s currently on tap in one place.



The month of March is ending, so it’s time for this month’s roundup of recommended reads. This month consists of three main topics: PHP, JavaScript and project management.

Even though I work a lot with PHP, there’s always something new to learn. I’ve previously dived a bit into how Composer works. Earlier in the month my attention was drawn to the Composer documentation, namely the part about autoloader optimization, which features hints on how to make the Composer autoloader work faster in your production environment.

I’m working a lot with JavaScript these days, especially with the Vue framework. A lot is currently happening in the JavaScript space, and I’m struggling to both get up to date and stay at least partly on top of what’s going on.

JavaScript ES6, aka ECMAScript 2015, is the newest version of the ECMAScript standard and comes with a bunch of new features. Sitepoint has a nice article about 10 ES6 native features that you previously needed external libraries to get. They also have a quick rundown of 3 JavaScript libraries to keep your eyes on in 2017, including the aforementioned Vue.

“Awesome” lists, curated lists of links somebody finds noteworthy, seem to be the big thing on GitHub these days. So much so that there are even awesome awesomeness meta lists popping up. Any respectable (and every other) technology seems to have at least one dedicated awesome list, and of course there is also an awesome Vue list.

When working with a framework and a related template engine, there is usually a standard way of sharing data between the application and the frontend. But as the distance between the front and back end of an application grows, with more functionality moving to the browser, sharing data gets a bit more complex. Jesse Schutt has a nice article about different strategies for sharing data between Laravel and Vue applications, but despite the name, the strategies are applicable to any application where the back end and front end are separated and need to share data.

On a less technology-focused note, PHPStan author Ondřej Mirtes wrote a nice article on How PHPStan got to 1000 stars on GitHub. Despite the rather click-baitish title, the article has some really nice points on how to keep your focus while building, launching and growing an open source project and the community around it. Most of the points are relevant to any development process, not only open source projects, and they’re definitely worth being aware of.

On the same topic of project building, I stumbled upon an article about the self-limiting habits of entrepreneurs, which is also worth a read for anybody trying to build a project, open source or otherwise.



February is coming to a close, and it’s time for a monthly round-up.

Even though February is the shortest month of the year, I doubt it will be the least eventful. The security scene in particular has been on fire this month, and I doubt we’ve seen the last of the debris.

The month gave us two major security related findings from Google.

First, they announced the first practical way to create SHA-1 hash collisions, putting the final nail in the coffin for SHA-1 usage in any security context.

Later in the month, Google’s security research team, Project Zero, announced how Cloudflare’s reverse proxies would, in certain cases, return private data from memory, a bug which came to be known as Cloudbleed. The Google researchers worked with Cloudflare to stop the leak, but according to Cloudflare’s incident report, the issue had been open for a while.

On a slightly different note: Laravel is a popular PHP framework, and articles about it online seem to consist of about equal amounts of hype and belittlement. Earlier this month a critical analysis of Laravel was making the rounds in the Twittersphere. I believe it provides a nice description of the pros and cons of Laravel, without falling for either the hype or the hatred that is often displayed in framework discussions in general, and Laravel discussions in particular.

As a lead developer, I spend a lot of time thinking about and making decisions on software architecture, so it’s always nice to get some inspiration and new ideas. Even though it’s a rather old article by now, I believe Uncle Bob has some nice points in Screaming Architecture, where he points out that the architecture of a piece of software should make it obvious what the software does, rather than which framework it’s built upon.

Developers seem to find incredible performance gains when upgrading to PHP 7, ranging from Tumblr reporting more than 50% performance improvement to Badoo saving one million dollars per year in hosting and server costs. For the nerds out there, PHP core contributor Julien Pauli did a deep dive into the technical side of PHP 7’s performance improvements.

On the topic of performance, I found a collection of open source performance testing and monitoring tools that I’d like to look more into.

Want to know more about what’s going on in the PHP community? Here is a nice curated list of PHP podcasts.


HTTPS and HTTP/2 with letsencrypt on Debian and nginx


Most web servers today run HTTP/1.1, but HTTP/2 support is growing. I’ve written more in-depth about the advantages of HTTP/2. Most browsers that support HTTP/2 only support it over encrypted HTTPS connections. In this article, I’ll go through setting up NGINX to serve pages over HTTPS and HTTP/2. I’ll also talk a bit about tightening your HTTPS setup to prevent common exploits.


HTTPS utilises public-key cryptography to provide end-to-end encryption and authentication of connections. The certificates are signed by a certificate authority, which can then verify that the certificate holder is who he claims to be.

Letsencrypt is a free and automated certificate authority that provides free certificate signing, something that has historically been a costly affair.

Signing and renewing certificates from Letsencrypt is done using their certbot tool, which is available in most package managers and therefore easy to install. On Debian Jessie, just install certbot like anything else from apt:
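Something along these lines (on Jessie the certbot package was shipped via the backports repository, so that may need to be enabled first):

```shell
# On plain Debian Jessie, certbot comes from jessie-backports;
# enable that repository first if you haven't already.
sudo apt-get install -t jessie-backports certbot
```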

Using certbot you can start generating new signed certificates for the domains of your hosted websites. For example:
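A minimal invocation using the webroot plugin, with example.com and the document root as placeholders for your own values:

```shell
# Obtain a certificate for example.com, proving ownership by letting
# certbot place a challenge file in the site's document root.
sudo certbot certonly --webroot -w /var/www/example.com -d example.com
```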

Letsencrypt certificates are valid for 90 days, after which they must be renewed by running:
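The renewal command is simply:

```shell
# Renews every installed certificate that is close to expiry.
sudo certbot renew
```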

To automate this process, you can add the renewal command to your crontab and make it run daily, weekly, or whatever you prefer. In my setup it runs twice daily:
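A crontab entry for twice-daily renewal could look something like this (the exact minutes and hours are arbitrary):

```shell
# m  h     dom mon dow  command
17   5,17  *   *   *    certbot renew --quiet
```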

Note that Letsencrypt currently doesn’t support wildcard certificates, so if you’re serving your website both with and without the www prefix (you probably shouldn’t), you need to generate a certificate for each hostname.


NGINX is a free open source asynchronous HTTP server and reverse proxy. It’s pretty easy to get up and running with Letsencrypt and HTTP/2, and it will be the focus of this guide.


To have NGINX serve encrypted data using our previously created Letsencrypt certificate, we have to ask it to listen for HTTPS connections on port 443 and tell it where to find our certificates.

Open your site config, usually found in /etc/nginx/sites-available/&lt;site&gt;. Here you’ll probably see that NGINX is currently listening for HTTP connections on port 80:
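A typical starting point looks something like this, with example.com standing in for your own domain:

```nginx
server {
    listen 80;
    server_name example.com;
    # ... the rest of your site config ...
}
```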

So to start listening on port 443 as well, we just add another line:
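The server block then listens on both ports (again with example.com as a placeholder):

```nginx
server {
    listen 80;
    listen 443 ssl;
    server_name example.com;
    # ... the rest of your site config ...
}
```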

The domain at the end would, of course, be the domain that your server is hosting.

Notice that we’ve added the ssl statement in there as well.

Next, we need to tell NGINX where to look for our new certificates so it knows how to encrypt the data. We do this by adding:
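The two relevant directives, with placeholder paths:

```nginx
ssl_certificate     /path/to/certificate-chain.pem;
ssl_certificate_key /path/to/private-key.pem;
```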

With the default Letsencrypt settings this would translate into something like:
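With example.com as a placeholder for your own domain:

```nginx
ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
```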

And that’s really all there is to it.


Enabling HTTP/2 support in NGINX is even easier, just add an http2 statement to each listen line in your site config. So:
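For example, this HTTPS listen line:

```nginx
listen 443 ssl;
```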

Turns into:
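The same line with HTTP/2 enabled:

```nginx
listen 443 ssl http2;
```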

Now test out your new, slightly more secure, setup:
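The configuration can be checked with NGINX’s built-in test:

```shell
# Parses the configuration and reports any errors without reloading.
sudo nginx -t
```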

If all tests pass, restart NGINX to publish your changes.

Now your website should also be available using the https:// scheme… unless the port is blocked in your firewall (that could happen to anybody). If your browser supports HTTP/2, your website should also be served over the new protocol.

Improving HTTPS security

With the current setup, we’re running over HTTPS, which is a good start, but a lot of exploits have been discovered in various parts of the implementation, so there are a few more things we can do to harden the security. We do this by adding some additional settings to our NGINX site config.

Firstly, old versions of TLS are insecure, so we should force the server not to fall back to them:
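For example, allowing only TLS 1.2, the newest version at the time of writing:

```nginx
ssl_protocols TLSv1.2;
```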

If you need to support older browsers like IE10, you need to turn on older versions of TLS:
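That is, something like:

```nginx
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
```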

This in effect only turns off the old SSL protocols; it’s not optimal, but sometimes you need to strike a balance, and it’s still better than the NGINX default settings. You can see which browsers support which TLS versions on caniuse.

When establishing a connection over HTTPS, the server and the client negotiate which encryption cipher to use. This has been exploited in some cases, like in the BEAST exploit. To lessen the risk, we disable certain old, insecure ciphers:
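One conservative example, based on the cipher string from the NGINX documentation (which exact ciphers to allow is a moving target, so check an up-to-date recommendation):

```nginx
ssl_prefer_server_ciphers on;
ssl_ciphers HIGH:!aNULL:!MD5:!RC4;
```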

Normally HTTPS certificates are verified by the client contacting the certificate authority. We can tune this a bit by having the server download the authority’s response and supply it to the client together with the certificate. This saves the client the roundtrip to the certificate authority, speeding up the process. It’s called OCSP stapling and is easily enabled in NGINX:
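A sketch of the relevant directives, again with example.com as a placeholder. The resolver is needed so NGINX can reach the OCSP responder; the Google public DNS addresses are just an example:

```nginx
ssl_stapling on;
ssl_stapling_verify on;
ssl_trusted_certificate /etc/letsencrypt/live/example.com/chain.pem;
resolver 8.8.8.8 8.8.4.4;
```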

Enabling HTTPS is all well and good, but if a man-in-the-middle (MitM) attack actually occurs, the perpetrator can decrypt the connection from the server and relay it to the client over an unencrypted connection. This can’t really be prevented, but it is possible to instruct the client to only accept encrypted connections from the domain; this mitigates the problem on every visit after the first. This is called Strict Transport Security:
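The header, with a max-age of one year in seconds:

```nginx
add_header Strict-Transport-Security "max-age=31536000";
```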

BE CAREFUL with this last setting, since it will prevent clients from connecting to your site for one full year if you decide to turn off HTTPS, or if an error in your setup causes HTTPS to fail.

Again, we test our setup:
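As before:

```shell
sudo nginx -t
```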

If all tests pass, restart NGINX to publish your changes (again, consider if you’re actually ready to enable Strict Transport Security).

Now, to test the setup. First we run an SSL test to check the security of our protocol and cipher choices; the changes mentioned here brought this domain up to an A+ grade.

We can also check our HTTPS header security. Since I still haven’t set up Content Security Policy and HTTP Public Key Pinning, I’m sadly stuck at a B grade, leaving room for improvement.

Further reading


HTTP/2 – What and why?

What is HTTP?

HTTP, HyperText Transfer Protocol, is an application layer network protocol for distributing hypermedia data. It is mainly used for serving websites in HTML format for browser consumption.


The newest version of HTTP is HTTP/2. The protocol originated at Google under the name SPDY. The specification work and maintenance have since been moved to the IETF.

The purpose of HTTP/2 was to develop a better-performing protocol while maintaining high-level backward compatibility with previous versions of HTTP. This means keeping the same HTTP methods, status codes, etc.

New in HTTP/2

HTTP/2 maintains backward compatibility in the sense that applications served over the protocol do not require any changes to work with the new version. But the protocol contains a range of new performance-enhancing features that applications can adopt on a case-by-case basis.

Header compression

HTTP/2 supports most of the headers supported by earlier versions of HTTP. As something new, HTTP/2 also supports compressing these headers to minimize the amount of data that has to be transferred.

Request pipelining

In earlier versions of HTTP, one TCP connection equaled one HTTP connection. In HTTP/2 several HTTP requests can be sent over the same TCP connection.

This allows HTTP/2 to bypass some of the issues in previous versions of the protocol, like the maximum connection limit. It also means that all of the formalities of TCP, like handshakes and path MTU discovery, only have to be done once.

No HOL blocking

Head-of-line (HOL) blocking occurs when incoming data must be handled in a specific order and the first piece of data takes a long time, forcing all of the following data to wait for it to be processed.

In HTTP/1, HOL blocking happens because multiple requests over the same TCP connection have to be handled in the order they were received. So if a client makes two requests to the same server, the second request won’t be handled before the first request has been completed. If the first request takes a long time to complete, this will hold up the second request as well.

HTTP/2 allows pipelining requests over a single TCP connection, allowing the second request to be received at the same time as, or even before, the first request. This means the server is able to handle the second request even while it’s still receiving the first one.

Server Push

In earlier versions, a typical web page was rendered in a sequential manner. First the browser would request the page. The server would then respond with the main content of the page, typically in HTML format. The browser would then start rendering the HTML from the top down, and every time it encountered an external resource, like a CSS file, an external JavaScript file, or an image, a new request would be made to the server for the additional resource.

HTTP/2 offers a feature called server push, which allows the server to respond to a single request with several different resources. The server can thus push required external CSS and JavaScript files to the client on a normal page request, allowing the client to have the resources at hand during rendering and avoiding the render-blocking additional requests that would otherwise have to be made.


The HTTP/2 specification, like earlier versions, allows HTTP to function over a normal unencrypted connection, and it also specifies a method for transferring over a TLS-encrypted connection, namely HTTPS. The major browser vendors, though, have decided to force higher security standards by only accepting HTTP/2 over encrypted HTTPS connections.


Validating email senders

Email is one of the most heavily used platforms for communication online, and has been for a while.

Most people using email expect that they can see who sent an email by looking at the sender field in their email client, but in reality the Simple Mail Transfer Protocol (SMTP) that defines how emails are exchanged (as specified in RFC 5321) does not provide any mechanism for validating that the sender is actually who he claims to be.

Not being able to validate the origin of an email has proven to be a really big problem, providing avenues for spammers, scammers, phishers and other bad actors to pretend to be someone they are not. A couple of mechanisms have since been designed on top of SMTP to try to add this validation layer, and I’ll try to cover some of the more widely used ones here.

Since we’re talking about online technologies, get ready for abbreviations galore!

Table of contents:

  • Sender Policy Framework (SPF)
  • DomainKeys Identified Mail (DKIM)
  • Domain Message Authentication Reporting & Conformance (DMARC)

Sender Policy Framework (SPF)

The Sender Policy Framework (SPF) is a simple system allowing mail exchangers to validate that a certain host is authorised to send out emails on behalf of a specific domain. SPF builds on the existing Domain Name System (DNS).

SPF requires the domain owner to add a simple TXT record to the domain’s DNS records, specifying which hosts and/or IPs are allowed to send out mail on behalf of the domain in question.

An SPF record is a simple line in a TXT record, listing the allowed senders and a policy for everyone else. For example:
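An illustrative record, using documentation-reserved names and addresses: it allows a /24 network plus the hosts in another domain’s SPF record to send mail, and tells receivers to reject everything else:

```
example.com.  IN  TXT  "v=spf1 ip4:192.0.2.0/24 include:_spf.example.net -all"
```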

When an email receiver receives an email, the process is:

  1. Look up the sending domain’s DNS records
  2. Check for an SPF record
  3. Compare the SPF record with the actual sender
  4. Accept or reject the email based on the outcome of step 3
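As a toy illustration of step 3, here is a much-simplified SPF evaluator in Python. It only understands ip4:/ip6: mechanisms and the all catch-all (real SPF also has a, mx, include, redirect, macros, and more), so it’s a sketch of the idea rather than a compliant implementation:

```python
import ipaddress

QUALIFIERS = {"+": "pass", "-": "fail", "~": "softfail", "?": "neutral"}

def check_spf(record: str, sender_ip: str) -> str:
    """Simplified SPF check: ip4:/ip6: mechanisms and 'all' only."""
    if not record.startswith("v=spf1"):
        return "none"  # not an SPF record at all
    ip = ipaddress.ip_address(sender_ip)
    for term in record.split()[1:]:
        qualifier = "+"  # mechanisms default to "pass" when they match
        if term[0] in QUALIFIERS:
            qualifier, term = term[0], term[1:]
        if term.startswith(("ip4:", "ip6:")):
            if ip in ipaddress.ip_network(term[4:], strict=False):
                return QUALIFIERS[qualifier]
        elif term == "all":  # catch-all for every sender not matched above
            return QUALIFIERS[qualifier]
    return "neutral"

record = "v=spf1 ip4:192.0.2.0/24 -all"
print(check_spf(record, "192.0.2.26"))    # -> pass
print(check_spf(record, "198.51.100.7"))  # -> fail
```

A real receiver would of course first fetch the record from DNS (steps 1 and 2) before evaluating it like this.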

For more details check out the SPF introduction or check your domain’s setup with the SPF checker.

DomainKeys Identified Mail (DKIM)

DomainKeys Identified Mail (DKIM) is another mechanism for validating whether the sender of an email is actually allowed to send mail on behalf of the sending domain. Similarly to SPF, DKIM builds on DNS, but DKIM uses public-key cryptography, similar to the TLS and SSL protocols that provide the basis for HTTPS.

In practice DKIM works by having the sender add a header to every email being sent, containing a hashed and signed version of part of the body as well as some selected headers. The receiver then reads the header, queries the sending domain’s DNS for the public key, and verifies the signature.

For more details, Wikipedia provides a nice DKIM overview.

Domain Message Authentication Reporting & Conformance (DMARC)

Domain Message Authentication Reporting & Conformance (DMARC) works in collaboration with SPF and DKIM. In its simplest form, DMARC provides rules for how to handle messages that fail their SPF and/or DKIM checks. Like the other two, DMARC is specified using a TXT DNS record. For instance:
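An illustrative record (the domain and addresses are placeholders):

```
_dmarc.example.com.  IN  TXT  "v=DMARC1; p=none; ruf=mailto:forensics@example.com; rua=mailto:aggregate@example.com"
```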

This specifies that the record contains DMARC rules (v=DMARC1), that nothing special should be done to emails failing validation (p=none), and that forensic and aggregate reports should be sent to the mailto: addresses given in the ruf and rua tags, respectively.

Putting DMARC in “none” mode is a way of putting it in investigation mode. In this mode the receiver will not change anything in the handling of failing mails, but error reports will be sent to the email addresses specified in the DMARC DNS record. This allows domain owners to gather data about where emails sent from their domains are coming from, and to make sure all necessary SPF and DKIM settings are aligned properly, before moving DMARC into quarantine mode, where receivers are asked to flag failing mails as spam, or reject mode, where failing emails are rejected outright.

For a bit more detail, check out the DMARC overview or check the validity of your current setup using the DMARC validator.


WordPress security and brute force attacks

There is currently a lot of talk about a botnet said to be busy brute forcing its way into WordPress sites everywhere. The attack reportedly originates from up to 90,000 different sources.

Whether such an attack is actually under way I cannot say, but it has apparently made a lot of WordPress users reconsider their current level of security, so something positive has come out of it either way.

I see many people discussing the problem and possible solutions, which again is positive, but many don’t quite know what is good and what is bad, and why, so I’ll try to offer some considerations you should make before blindly throwing yourself into installing random security plugins.

A number of supporters on the official WordPress support forum have put together a page on the WordPress Codex with tips and tricks for protecting yourself against the attack, so if you want the short version it can be found there. I’ll go a bit deeper and try to explain everything in more detail.

What is actually happening?

For good order’s sake I’ll start by explaining the actual problem, so we’re all talking about the same things, and to give an idea of the considerations you should make.

Botnets? What?

The attack in question originates from a large botnet. A botnet is a number of computers that, mostly without the owner’s knowledge, are used by a third party for purposes other than what the owner of the computer intended.

In this case the people behind the botnet are reportedly trying to gain control of more computers in order to expand their botnet. They do this by “brute forcing” their way to administrator access on machines running websites built on the WordPress platform. If they succeed in gaining access to a WordPress site, they make that site part of their botnet, and in this way the attack grows every time they manage to take over another WordPress site.

What about that “brute forcing”?

Brute forcing is about gaining access through “brute force”, i.e. raw power. It works by simply trying to log in as the admin user with more or less random passwords; in theory any password can be guessed given unlimited attempts and enough patience. Since this is an automated attack it costs the attackers no time, and they therefore have plenty of patience.

But what can I do about it?

First of all, take your normal precautions. ALWAYS keep WordPress fully updated. The same goes for all installed plugins and themes. This is not specific to the current attack, but all software is insecure in one way or another. The people behind WordPress work hard to close any security holes that are found as fast as possible, so make sure you’re always up to date. Also make sure to disable any plugins you don’t use. As mentioned, no software is 100% secure, so a rule of thumb is: the less software you have, the better.

Regarding the current attack there are of course some extra things you can do.

Back up everything!

A backup obviously doesn’t protect you against attacks from the outside. But if you should be unlucky enough to either fall victim to the attack, or break your site while trying to secure it, it’s a lot nicer to have an up-to-date backup than to start all over. So make sure to back up both the site itself (via FTP) and the database (e.g. via phpMyAdmin), preferably right now, before you try to improve your security.

The backup process itself depends on where your website is hosted, so I can’t be of much help there, but as usual there are a number of suggestions on the WordPress Codex.

Change the name of your admin user

The ongoing attack apparently exploits the fact that WordPress in the old days automatically created an administrator user with the username admin. From version 3.0 onward it has been possible to choose this username yourself, but not everyone has changed it. Changing this username makes it harder to brute force your way into the site, since attackers then have to find both the username and the password, instead of only the password.

If you have an administrator user with the username admin, it can be changed fairly easily with a plugin, or with a few manual steps:

  1. Log in as your administrator user
  2. In the admin panel, go to “Users” -> “Add New”
  3. Create a new user and set the user’s role to administrator (with a proper password!)
  4. Log out.
  5. Log in with your newly created administrator user.
  6. In the admin panel, go to “Users” -> “All Users”.
  7. Hover over the admin user and click the “delete” link that appears.

If you find that you can’t delete the admin user, double-check that you’re logged in with your newly created user and not still logged in as the admin user (yes, we all make mistakes :-)).

Use proper passwords

This is probably the most important point on the list, and unfortunately also the hardest to do properly. Always use proper passwords that aren’t easy to guess. As mentioned, the current attack is a brute force attack that tries to guess your password, which will normally happen in one of two ways.

One way is to start from one end, trying “a” as the password, then “b”, and when all options are exhausted, moving on to two letters, starting with “aa”. This can obviously take a long time, but if your password is, say, “asdf”, it won’t take long at all.

Another method is to start from a dictionary. Instead of trying all combinations of letters and digits, only words from a predefined list are tried. These lists will often also contain commonly used variants of words, where e.g. letters are replaced with digits. This means that neither “password” nor “p4ssw0rd” is a particularly good password.

A whole lot has been written about how to make your passwords secure. The WordPress Codex also mentions some good precautions to consider, and points to some services that can help generate and remember good passwords. The most important points are:

  • Don’t use personal information, such as your name, your birthday or similar public information.
  • Don’t use words from the dictionary.
  • Longer passwords are generally better passwords.
  • Use a mix of upper and lower case letters, digits and symbols.

But always keep in mind the guessing methods described above. 5up3rM4nRul3z43v3r (“superman rules forever”) is not necessarily a good password, since it’s still a combination of words guaranteed to appear in any password cracking dictionary.

Good passwords will usually (always?) be hard to remember. One way to make this easier is to use a password manager, a program or service that remembers your passwords for you. I use LastPass myself, which means I only have one very strong master password to remember. That master password then gives access to all my passwords for various websites. It means I can use services to generate random passwords (or at least random enough), without having to remember them all or write them down on pieces of paper that I might throw away.

Block access to your login page

Since the ongoing attack tries to gain access to the administration interface of WordPress sites through the normal login method, one way to increase protection is to restrict access to the admin panel. The two most common approaches are:

  • Password protect the admin panel
  • Block access to the login file at the IP level

Common to the two methods is that they can work at one of two levels: either through WordPress itself, or directly at the server level. Which one you use might seem unimportant, but I’d like to stress why it makes a difference.

WordPress is written in PHP, which among other things is used to build dynamic websites. This means that every time security is handled in WordPress, the server has to fire up all of WordPress just to determine whether a user should have access to a resource. The same security can often be handled at the server level, which means much less work for the server, since a potential attacker is denied access before WordPress is even started.

Password-protect the admin panel

As mentioned, password protection can happen both at the server level and at the WordPress level. Password protection at the WordPress level goes through WordPress' own login system; that is the protection we talked about earlier, with changing the admin username and using good passwords.

The password protection can, however, also be moved down to the server level with HTTP authentication. The result is that when you visit your WordPress login page, you will first be met by a popup asking for a username and password, and only after that will you see the normal WordPress login page. This is of course a trade-off between security and convenience, since having to log in twice can feel a bit annoying. If you choose this method, remember not to use the same username/password combination for both layers of protection. The WordPress Codex naturally also has instructions for setting up HTTP authentication, on both Apache and Nginx servers.
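On an Apache server, such a setup might look roughly like this; this is only a sketch, the path to the password file is a placeholder, and the file itself would be created beforehand with the `htpasswd` tool:

```apache
# In the .htaccess file of the WordPress installation:
# require HTTP Basic authentication for the login page only.
<Files "wp-login.php">
    AuthType Basic
    AuthName "Restricted"
    AuthUserFile /path/to/.htpasswd
    Require valid-user
</Files>
```

Note that Basic authentication sends credentials essentially in the clear, so it should only be used over HTTPS.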

Block access at the IP level

Another method is to block access to the WordPress login form based on the requester's IP address. If you know that you have a static IP address, and that you will keep that same address for as long as you have your WordPress site, you can of course set things up so that only that one IP address can reach your login page, but few people are that lucky (and now is not the time for an IPv6 discussion).
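For those who do have a static address, such an allow-list could be sketched like this on Apache 2.4 (again just an illustration; the IP address is a placeholder from the documentation range):

```apache
# In .htaccess: only allow a single IP address to reach the login page.
<Files "wp-login.php">
    Require ip 203.0.113.42
</Files>
```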

An alternative, which will be more relevant for most people, is to only allow an IP address X login attempts before it is blocked. This can be done at the WordPress level with a plugin like WordFence (thanks to Kasper Bergholt for recommending the plugin, which apparently can do a lot of other nice things as well), but again we have the problem that the whole WordPress stack has to be started before it can be decided whether a user should have access or not. Doing it at the server level is somewhat more cumbersome, and probably not something everyone should just throw themselves into, but a guide to setting up rate limiting on Apache can be found here. Setting up rate limiting in .htaccess, however, requires running Apache with mod_security 2.7.3 or later.
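The core idea of such a rate limiter, allowing each IP only a limited number of attempts within a time window, can be sketched in a few lines of Python (my own illustration of the principle, not how WordFence or mod_security actually implement it):

```python
import time
from collections import defaultdict

class LoginRateLimiter:
    """Allow each IP at most `max_attempts` login attempts per `window` seconds."""

    def __init__(self, max_attempts=5, window=300):
        self.max_attempts = max_attempts
        self.window = window
        self.attempts = defaultdict(list)  # ip -> timestamps of recent attempts

    def allow(self, ip, now=None):
        """Return True if this IP may attempt a login right now."""
        now = time.time() if now is None else now
        # Forget attempts that have fallen outside the time window.
        self.attempts[ip] = [t for t in self.attempts[ip] if now - t < self.window]
        if len(self.attempts[ip]) >= self.max_attempts:
            return False  # blocked: too many recent attempts
        self.attempts[ip].append(now)
        return True
```

A real deployment would also need to persist this state and expire old entries, but the principle is no more complicated than this.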

A problem with this approach against the ongoing attack is that, as explained, the attack is carried out by a botnet operating from more than 90,000 different IP addresses (with room to grow if they are successful). That means they can potentially continue the attack from a new IP if one of the addresses gets blocked. Still, there must always be some resource cost attached to moving the attack to a new sender, since someone has to keep track of how far each attacker has gotten through the dictionary. So the more layers of security you can set up, the harder it becomes for the attacker.


So, to summarize what I think you should do to improve your WordPress security in general, and against the possibly ongoing attack:

  1. Keep WordPress updated, including
    • WordPress core
    • All installed plugins
    • All installed themes
  2. Take backups. If nothing else, to make it easier to get back on your feet if disaster strikes.
  3. Don't have an admin user named admin.
  4. Use proper passwords. Consider using a password manager.
  5. Password-protect your login page.
  6. Block access for IP addresses that try to guess your password.

If you have other good tips for protecting WordPress, I would of course also love to hear about them in the comment section below, so we can spread the word about proper security. (Not using WordPress at all does not count as a good tip, so if that is your mission, save yourself the trouble and don't post it here.)


Master's thesis, a.k.a. the next big project

Over time I have mentioned a number of projects of various sizes that I have worked on. Now it is time for yet another one, and this is a big one: the master's thesis for my studies at DTU, and with it the conclusion of my MSc in Engineering.

I will be working on an idea I have had for a long time but never got started on; now it finally has to happen. That I get to work on a project I find this interesting is just a bonus.

The basic idea

The basic idea is fairly simple, so simple that I am honestly surprised that I cannot find anyone else who has done it yet, at least not in Copenhagen.

There are many options for going out in Copenhagen: bars, pubs, music venues, and so on. In fact, there are so many that it can be hard to pick a place at any given time. At the same time, there are enough of them that it is hard to keep an overview of your options when you want to go out. It does not get easier if you find yourself in a part of town you may not know that well, where you may not know where to go (*cough* Amager).

The basic idea, then, is a simple way to search for places near wherever you happen to be. Most people have access to a smartphone when they go out, and smartphones offer both internet access and location services, so the technological requirements are usually met; only the service itself is missing.
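The core of such a proximity search is just a distance calculation; a minimal sketch in Python using the haversine formula (the place data here is made up for illustration):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two coordinates, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # 6371 km = mean Earth radius

def nearest(places, lat, lon):
    """Return places sorted by distance from the user's position."""
    return sorted(places, key=lambda p: haversine_km(lat, lon, p["lat"], p["lon"]))
```

A real service would of course use a spatial index rather than sorting every place on each query, but the principle is the same.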

You can already partially do this with existing services like AOK or Yelp, if you can filter out restaurants and shopping. Until recently, the now-defunct MitKBH could also partially offer something similar, and Carlsberg is apparently also trying with their new CrowdIt.

Where is the entertainment?

Sometimes, though, you want a bit more than just the nearest place, since the difference between a good and a bad bar can be as little as a 30 m walk. So it can be relevant to look at more than just the distance.

The first extension of the basic idea was therefore to widen the search to cover more of what the bars offer. It could be something as simple as finding a smoking or non-smoking bar, or, for example, the option of searching for bars that offer

  • A pool table
  • Billiards
  • Darts
  • A whisky menu
  • Specialty beers
  • Etc.

The possibilities are many, since we all have different reasons for going out. Technically, though, this is still a fairly simple search function, hardly something with thesis potential for an IT student. It has to be wilder than that!

The more advanced part

Location is just one of the many things a smartphone can tell you about its user. In many ways, a smartphone is a portable PC in the classic sense, that is, a Personal Computer. It will therefore often know many more pieces of information that could make a search result more interesting to the user, and this is where the real thesis potential comes into the picture. With access to more information about the user, it becomes possible to offer a more personalized service, and in general there is a lot of data that can be used to improve the experience.

Over time, your habits can say something about which places, or which types of places, you would normally choose. This can be used to recommend new places, either in the same genre, or places recommended by other people with habits similar to yours.
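That "people with habits like yours" idea is essentially collaborative filtering; here is a toy sketch of the principle using Jaccard similarity between sets of liked places (my own simplification for illustration, not a concrete thesis design):

```python
def jaccard(a, b):
    """Similarity between two sets of liked places, between 0 and 1."""
    return len(a & b) / len(a | b) if a | b else 0.0

def recommend(user_likes, others):
    """Score places liked by other users, weighted by how similar they are."""
    scores = {}
    for likes in others:
        sim = jaccard(user_likes, likes)
        for place in likes - user_likes:  # only suggest places that are new
            scores[place] = scores.get(place, 0.0) + sim
    return sorted(scores, key=scores.get, reverse=True)
```

A real recommender would need far more data and a smarter model, but even this naive version captures the core idea.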

Another possibility is to look at Facebook. Which places do your friends recommend? Those might be interesting to you as well, especially if they are friends you already spend a lot of time with, or often go out with.

Even something as simple as the time of day can be interesting to look at. There is hardly any reason to recommend a bar that closes half an hour from now if it already takes ten minutes to walk there. On the other hand, it could be interesting if your friends have announced, for example with a Facebook check-in, that they are somewhere near you.

The thesis itself

By now I have discussed the idea with quite a few people (thanks for the input, everyone!), and a lot of exciting ideas have come out of it, many more than summarized above. So I believe there are plenty of opportunities to extend the concept, and while I am still not entirely sure which ideas and angles I will end up choosing, I think there is potential.


Congratulations on the 100,000, Ubuntudanmark!

On 31 October 2006 at 21:06, the first post was made on the Danish Ubuntu support forum. That was the humble beginning of a forum that has since come a long way.

The forum currently has more than 5,000 users, and today they reached another milestone: as of today, no fewer than 100,000 posts have been written! They must be doing something right.

Congratulations to the many active users, and not least the skilled supporters! Well done, and good luck with the next 100,000! 🙂
