
Category: Security

July-2017.php

Even though July was a pretty quiet month online due to everybody being on vacation, I still came across some interesting articles.

Earlier in the year, there was a big discussion in the PHP community after some people suggested that pretty much everything in your code was cruft, and that code should be as concise as possible. An argument for the opposite position comes from "type-hint all the things", which makes the case that type hinting reduces the cognitive load of reading code.

In security training, Brute Logic presents The 7 main XSS cases everyone should know.

We’ve recently moved our entire front-end build process at work to WebPack. This has very much been a black box to me, so I was happy to see WebPack core contributor Sean Larkin announce his new online course WebPack the core concepts.

Of course, there’s also something Laravel related. First, a tweet by Mohammed Said, subtly announcing a nice new way for queue jobs in Laravel to determine whether they should actually go to the queue.

Matt Stauffer posed the question What packages do you install on every Laravel application you create?. The result is a list of interesting packages I’d recommend looking through. Most of them are not Laravel specific, so other PHP developers might find something interesting as well.


June-2017.php

Summer is on, June is over, and it’s time for yet another summary of the month that passed. This month’s focus will be on PHP and Laravel, with an emphasis on performance, but many of the tricks are also useful if you work with other technologies. There’s also a gem about database encryption.

I’ve had a hard time finding a plugin that provides proper syntax highlighting and folding capabilities when working with JavaScript, but Vim-vue seems to meet most of my needs when working with Vue components. Finally!

The entire symfony.com frontend was refactored. This provided a lot of useful insights that were documented in a post on refactoring the symfony.com frontend. This process also formed the basis for the new Symfony frontend component, Webpack Encore, a Webpack wrapper and asset manager which will be introduced in a coming version of Symfony.

StyleCI founder Graham Campbell has an interesting post on the architecture of a Laravel package. Besides the Laravel specifics it also contains some interesting points on advanced composer usage.

Performance is always an interesting topic. Chris Fidao from the awesome Servers for Hackers released a new section dedicated to Laravel Performance. The focus is on low-hanging fruit, and I believe most people will find something they can do right now to improve the performance of their application. Even though the emphasis is on Laravel, most of the tips revolve around object caching, which applies to any PHP project, and database optimisations, which apply to any project where a database is in use.

Olav van Schie also has a performance focus in his Make your Laravel app fly with PHP OPCache. Again, even though the title says Laravel, OPCache optimisations can help improve performance on any PHP project, and the article gets a bit deeper into how to set up your OPCache settings.

The last article of the month focuses on security. Scott Arciszewski from the Paragon Initiative has a very interesting article about Building searchable encrypted databases. He covers implementations of database encryption, both good and bad, and how to set up your database so that you can search your encrypted data in a performant way, without lowering your security.


February-2017.php

February is coming to a close, and it’s time for a monthly round-up.

Even though February is the shortest month of the year, I doubt it will be the least eventful. The security scene in particular has been on fire this month, and I doubt we’ve seen the last of the fallout.

The month gave us two major security related findings from Google.

First, they announced the first practical way to create SHA-1 hash collisions, putting the final nail in the coffin for SHA-1 usage in any security context.

Later in the month, Google’s security research team, Project Zero, announced how Cloudflare’s reverse proxies would, in certain cases, return private data from memory, a bug which came to be known as Cloudbleed. The Google researchers worked with Cloudflare to stop the leak, but according to Cloudflare’s incident report, the issue had been open for a while.

On a slightly different note: Laravel is a popular PHP framework. Articles online about the framework seem to be equal parts hype and belittlement. Earlier this month a critical analysis of Laravel was making the rounds in the Twittersphere. I believe it provides a nice description of the pros and cons of Laravel, without falling for either the hype or the hatred that is often displayed in framework discussions in general, and Laravel discussions in particular.

As a lead developer, I spend a lot of time thinking about and making decisions on software architecture, so it’s always nice to get some inspiration and new ideas. Even though it’s a rather old article by now, I believe Uncle Bob makes some nice points in Screaming Architecture, where he argues that the architecture of a piece of software should make it obvious what the software does, rather than which framework it’s built upon.

Developers seem to find incredible performance gains when upgrading to PHP 7, from Tumblr reporting more than 50% performance improvement to Badoo saving one million dollars per year in hosting and server costs. For the nerds out there, PHP core contributor Julien Pauli did a deep dive into the technical side of PHP 7’s performance improvements.

On the topic of performance, I found Sitespeed.io, a collection of open source performance testing/monitoring tools, that I’d like to look more into.

Want to know more about what’s going on in the PHP community? Here is a nice curated list of PHP podcasts.


HTTPS and HTTP/2 with letsencrypt on Debian and nginx

Introduction

Most web servers today run HTTP/1.1, but HTTP/2 support is growing. I’ve written more in-depth about the advantages of HTTP/2. Most browsers which support HTTP/2 only support it over encrypted HTTPS connections. In this article, I’ll go through setting up NGINX to serve pages over HTTPS and HTTP/2. I’ll also talk a bit about tightening your HTTPS setup, to prevent common exploits.

Letsencrypt

HTTPS security utilises public-key cryptography to provide end-to-end encryption and authentication for connections. The certificates are signed by a certificate authority, which vouches that the certificate holder is who they claim to be.

Letsencrypt is a free, automated certificate authority that provides certificate signing at no cost, which has historically been a costly affair.

Signing and renewing certificates from Letsencrypt is done using their certbot tool, which is available in most package managers and therefore easy to install. On Debian Jessie, just install certbot like anything else from apt:
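The installation might look something like this; at the time, certbot shipped via jessie-backports, so the `-t` flag is an assumption about your apt sources:

```shell
# Install certbot; on Jessie it lived in the backports repository
sudo apt-get update
sudo apt-get install -t jessie-backports certbot
```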

Using certbot you can start generating new signed certificates for the domains of your hosted websites:
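The general form of the command is something like the following sketch, using the webroot plugin (the placeholders are yours to fill in):

```shell
# Request a certificate, proving domain ownership via a challenge
# file that certbot places under the site's webroot
sudo certbot certonly --webroot -w <webroot path> -d <domain>
```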

For example
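Assuming a site served from /var/www/example.com and answering on both the bare and www domains:

```shell
sudo certbot certonly --webroot -w /var/www/example.com \
    -d example.com -d www.example.com
```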

Letsencrypt certificates are valid for 90 days, after which they must be renewed by running
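The renewal command checks all installed certificates and renews those that are close to expiry:

```shell
sudo certbot renew
```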

To automate this process, you can add this to your crontab, and make it run daily or weekly or whatever you prefer. In my setup it runs twice daily:
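A sketch of such a crontab entry; the exact times are my choice, and remember that NGINX needs a reload after a renewal (e.g. via a renew hook):

```shell
# /etc/crontab: attempt renewal twice a day; the renew command is
# a no-op unless a certificate is actually close to expiry
17 6,18 * * * root certbot renew --quiet
```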

Note that Letsencrypt currently doesn’t support wildcard certificates, so if you’re serving your website from both example.com and www.example.com (you probably shouldn’t), you need to generate a certificate for each.

NGINX

NGINX is a free open source asynchronous HTTP server and reverse proxy. It’s pretty easy to get up and running with Letsencrypt and HTTP/2, and it will be the focus of this guide.

HTTPS

To have NGINX serve encrypted data using our previously created Letsencrypt certificate, we have to ask it to listen for HTTPS connections on port 443, and tell it where to find our certificates.

Open your site config, usually found in /etc/nginx/sites-available/<site>. Here you’ll probably see that NGINX is currently listening for HTTP connections on port 80:
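A minimal server block of that kind might look like this, with example.com standing in for your own domain:

```nginx
server {
    listen 80;
    listen [::]:80;

    server_name example.com;

    # ... rest of the site configuration
}
```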

So to start listening on port 443 as well, we just add another line:
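A sketch of the relevant lines after the addition, again with example.com as a stand-in:

```nginx
listen 80;
listen 443 ssl;

server_name example.com;
```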

The domain at the end would, of course, be the domain that your server is hosting.

Notice that we’ve added the ssl statement in there as well.

Next, we need to tell NGINX where to look for our new certificates, so it knows how to encrypt the data, we do this by adding
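The two directives in question, with placeholder paths:

```nginx
ssl_certificate     /path/to/fullchain.pem;
ssl_certificate_key /path/to/privkey.pem;
```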

With the default Letsencrypt settings this would translate into something like:
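Certbot places certificates under /etc/letsencrypt/live/<domain>/, so for example.com:

```nginx
ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
```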

And that’s really all there is to it.

HTTP/2

Enabling HTTP/2 support in NGINX is even easier, just add an http2 statement to each listen line in your site config. So:
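A listen line like this:

```nginx
listen 443 ssl;
```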

Turns into:
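With the http2 statement added:

```nginx
listen 443 ssl http2;
```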

Now test out your new, slightly more secure, setup:
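NGINX ships a built-in config check for this:

```shell
# Check the NGINX configuration for syntax and certificate errors
sudo nginx -t
```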

If all tests pass, restart NGINX to publish your changes.

Now your website should also be available using the https:// scheme… unless the port is blocked in your firewall (that could happen to anybody). If your browser supports HTTP/2, your website should also be served over this new protocol.

Improving HTTPS security

With the current setup, we’re running over HTTPS, which is a good start, but a lot of exploits have been discovered in various parts of the implementation, so there are a few more things we can do to harden the security. We do this by adding some additional settings to our NGINX site config.

Firstly, old versions of TLS are insecure, so we should force the server not to fall back to them:
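One possible directive, allowing only TLS 1.2; adjust to the clients you need to support:

```nginx
ssl_protocols TLSv1.2;
```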

If you need to support older browsers like IE 10, you need to enable older versions of TLS:
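A more permissive variant might look like this:

```nginx
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
```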

In effect, this only disables the old SSL protocols. It’s not optimal, but sometimes you need to strike a balance, and it’s still better than the NGINX default settings. You can see which browsers support which TLS versions on caniuse.

When establishing a connection over HTTPS, the server and the client negotiate which encryption cipher to use. This has been exploited in some cases, as in the BEAST exploit. To lessen the risk, we disable certain old, insecure ciphers:
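One possible configuration, based on the commonly recommended cipher lists of the time; check an up-to-date source such as the Mozilla SSL configuration generator before copying this verbatim:

```nginx
ssl_prefer_server_ciphers on;
ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:!aNULL:!eNULL:!MD5:!RC4:!3DES;
```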

Normally, HTTPS certificates are verified by the client contacting the certificate authority. We can improve on this by having the server download the authority’s response and supply it to the client together with the certificate. This saves the client the round trip to the certificate authority, speeding up the process. This is called OCSP stapling, and it is easily enabled in NGINX:
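A sketch using the Letsencrypt chain file from earlier; the resolver addresses are my choice (Google’s public DNS) and can be any resolver you trust:

```nginx
ssl_stapling on;
ssl_stapling_verify on;
ssl_trusted_certificate /etc/letsencrypt/live/example.com/chain.pem;
resolver 8.8.8.8 8.8.4.4;
```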

Enabling HTTPS is all well and good, but if a man-in-the-middle (MitM) attack actually occurs, the perpetrator can decrypt the connection from the server and relay it to the client over an unencrypted connection. This can’t really be prevented, but it is possible to instruct the client to only accept encrypted connections from the domain; this mitigates the problem on every visit after the first. This is called Strict Transport Security:
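The header is set like this; 31536000 seconds is the one-year max-age discussed below:

```nginx
add_header Strict-Transport-Security "max-age=31536000" always;
```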

BE CAREFUL with adding this last setting: it will prevent clients from connecting to your site for one full year if you decide to turn off HTTPS, or if an error in your setup causes HTTPS to fail.

Again, we test our setup:
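As before, a config check before reloading:

```shell
sudo nginx -t
```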

If all tests pass, restart NGINX to publish your changes (again, consider if you’re actually ready to enable Strict Transport Security).

Now, to test the setup. First we run an SSL test to check the security of our protocol and cipher choices; the changes mentioned here brought this domain up to an A+ grade.

We can also check our HTTPS header security. Since I still haven’t set up Content Security Policies and HTTP Public Key Pinning, I’m sadly stuck at a B grade, leaving room for improvement.



Validating email senders

Email is one of the most heavily used platforms for communication online, and has been for a while.

Most people using email expect that they can see who sent an email by looking at the sender field in their email client, but in reality the Simple Mail Transfer Protocol (SMTP) that defines how emails are exchanged (as specified in RFC 5321) does not provide any mechanism for validating that the sender is actually who they claim to be.

Not being able to validate the origin of an email has proven to be a really big problem, and provides avenues for spammers, scammers, phishers and other bad actors to pretend to be someone they are not. A couple of mechanisms have since been designed on top of SMTP to try and add this validation layer, and I’ll try to cover some of the more widely used ones here.

Since we’re talking about online technologies, get ready for abbreviations galore!


Sender Policy Framework (SPF)

The Sender Policy Framework (SPF) is a simple system that allows mail exchangers to validate that a certain host is authorised to send out emails on behalf of a specific domain. SPF builds on the existing Domain Name System (DNS).

SPF requires the domain owner to add a simple TXT record to the domain’s DNS records, which specifies which hosts and/or IPs are allowed to send out mail on behalf of the domain in question.

An SPF record is a simple line in a TXT record of the form:
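The record consists of a version tag, one or more mechanisms, and a final catch-all qualifier (a sketch; the bracketed parts are placeholders):

```
v=spf1 <mechanisms> <qualifier>all
```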

For example:
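A hypothetical record allowing a /24 network and a third-party sender, and soft-failing everything else (the IP range and hostnames are illustrative):

```
example.com.  IN  TXT  "v=spf1 ip4:192.0.2.0/24 include:_spf.example.net ~all"
```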

When a mail server receives an email, the process is:

  1. Look up the sending domain’s DNS records
  2. Check for an SPF record
  3. Compare the SPF record with the actual sender
  4. Accept / reject the email based on the output of step 3

For more details check out the SPF introduction or check your domain’s setup with the SPF checker.

DomainKeys Identified Mail (DKIM)

DomainKeys Identified Mail (DKIM) is another mechanism for validating whether the sender of an email is actually allowed to send mail on behalf of the sending domain. Like SPF, DKIM builds on DNS, but DKIM uses public-key cryptography, similar to the TLS and SSL that provide the basis for HTTPS.

In practice, DKIM works by having the sender add a header to every outgoing email containing a hashed and signed version of part of the body, as well as some selected headers. The receiver then reads the header, queries the sending domain’s DNS for the public key, and checks the signature’s validity.
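The public key is published in a DNS TXT record under a chosen selector; a sketch, where the selector name and key are placeholders:

```
selector._domainkey.example.com.  IN  TXT  "v=DKIM1; k=rsa; p=<base64-encoded public key>"
```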

For more details, Wikipedia provides a nice DKIM overview.

Domain Message Authentication Reporting & Conformance (DMARC)

Domain Message Authentication Reporting & Conformance (DMARC) works in collaboration with SPF and DKIM. In its simplest form, DMARC provides rules for how to handle messages that fail their SPF and/or DKIM checks. Like the other two, DMARC is specified using a TXT DNS record, of the form
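A sketch of the general shape, with the bracketed parts as placeholders:

```
v=DMARC1; p=<none|quarantine|reject>; rua=mailto:<address>; ruf=mailto:<address>
```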

For instance
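A record matching the description below, published under the _dmarc subdomain:

```
_dmarc.example.com.  IN  TXT  "v=DMARC1; p=none; ruf=mailto:email@example.com; rua=mailto:email@example.com"
```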

This specifies that the record contains DMARC rules (v=DMARC1), that nothing special should be done to emails failing validation (p=none), that any forensic reports should be sent to email@example.com (ruf=email@example.com) and that any aggregate reports should be sent to email@example.com (rua=email@example.com).

Setting the DMARC policy to none puts DMARC in investigation mode. In this mode, the receiver will not change how failing mails are handled, but error reports will be sent to the email addresses specified in the DMARC DNS record. This allows domain owners to gather data about where emails sent from their domains actually originate, and to make sure all necessary SPF and DKIM settings are aligned properly, before moving DMARC to quarantine mode, where receivers are asked to flag mails failing their checks as spam, or reject mode, where failing emails are rejected straight away.

For a bit more detail, check out the DMARC overview or check the validity of your current setup using the DMARC validator.


Network security – Definitions and basics

I’m currently taking a course called Operating Systems for Network Security. I thought I would share some of my notes here on the blog as I go; others might find some inspiration in them.

Definitions

First, let’s start with some definitions. These are the basic elements we have to work with when protecting a network.

Border Routers

The purpose of a router is to direct traffic on a network and make sure every packet reaches the right destination. A border router is the last router separating your network from a network you don’t trust, e.g. the internet.

Firewalls

While the router sends all traffic towards its intended recipient, a firewall filters the data packets, so that only the desired data is allowed into, or out of, a network.

Intrusion Detection systems (IDSs)

An Intrusion Detection System (IDS) works like a burglar alarm, raising an alert if an intrusion attempt is detected. There are two kinds of IDS: network-based (NIDS), which is attached to a firewall and monitors the traffic on the network, and host-based (HIDS), which is attached to an individual machine and monitors the traffic to that machine.

Intrusion Prevention systems (IPSs)

While an IDS tries to detect intrusion attempts and warn an administrator, an IPS will itself try to stop the intrusion attempt.

Virtual Private Networks (VPNs)

A VPN is an encrypted connection over an insecure channel such as the internet. This is useful, for example, when users need access to internal resources such as mail servers from external locations, e.g. from home. A VPN only protects the connection itself, though; if a client is compromised, it may be possible to penetrate a private network through a VPN connection set up by the compromised client.

Software Architecture

Software security is also important to take into account. If a web shop, for example, runs e-commerce software with security holes, malicious actors may be able to gain access to internal data by exploiting those holes. Since this typically involves a connection between internal software and a database, it can usually happen without either an IDS or an IPS noticing.

De-Militarized Zones (DMZs) and Screened Subnets

A De-Militarized Zone (DMZ) is an area of a network that is not protected by a firewall, often the area between a border router and a firewall.
A screened subnet is a part of the network behind a firewall which can still be reached from the internet. Screened subnets can be used, for example, to separate web and mail servers, which must be reachable from the internet over specific protocols, from the users’ workstations, which need access to the internet but should not be reachable from the outside.

Defense in Depth

A good security model works like an onion: it consists of several independent layers which together should protect the systems you are trying to secure. It’s important to keep every protective layer in order since, as we all know, nothing is perfect. Should an intruder manage to slip through the outermost layer, it’s good to have the next layer deal with the problem. We work with three components here.

  • The Perimeter
  • The Network
  • The Human factor

These three will be covered one by one.

The Perimeter

When talking about network security, people usually think of the perimeter. This is the outermost layer of the network, separating your network from the big bad internet. Security at this layer means things like:

  • Static Packet filters
  • Stateful firewalls
  • Proxy firewalls
  • IDS & IPS
  • VPN devices

The Network

This is, of course, your internal network, meaning servers, clients, and network hardware. The network layer can usefully be split into two smaller parts: the network hardware, and the systems you are trying to protect.

The network

On the network itself, security can consist of things like:

  • Ingress & egress filters
  • Internal firewalls dividing the network into smaller segments
  • IDS systems watching for anyone who has managed to break through the perimeter

Internal systems

On the internal systems, security can consist of things like

  • Personal (host-centric) firewalls
  • Antivirus software
  • Patching security holes in the OS and other software
  • Configuration management – procedures ensuring that both software and antivirus definitions are kept up to date
  • Auditing

A large part of this security can be partially automated, such as rolling out software updates; other parts require a fair amount of manual work, such as the ongoing auditing of security procedures.

The Human factor

A common mistake when planning IT security is forgetting the human factor and thinking only about hardware and software security. It is important that all users are familiar with the organisation’s security policy and know what is expected of them, and not least why. If, for example, an employee takes their machine home and accidentally installs a worm that spreads to the organisation’s entire internal network through a VPN connection, all the firewalls in the world won’t save you. It is therefore important to have clear security procedures that account for things like:

  • Authority – who is responsible for what, with regard to a given procedure?
  • Scope – who needs to know each procedure?
  • Expiration – when will a given procedure become obsolete?
  • Specificity – what is expected of the individual?
  • Clarity – can all involved parties actually understand the procedure?

Summary

So there are a great many things to take into account when working with security policy in an organisation: hardware, software, and human considerations, at multiple levels.
I will hopefully post more of these as I work through the book and lectures.
