Getting started with HTTP/2
HTTP/2 is here!
Now is the time to plan your strategy for adopting next-gen HTTP/2 technology. Making the transition is a no-brainer.
The new HTTP retains all of the methods, headers and status codes that are used in existing request/response cycles, paving an easy path from HTTP/1 to HTTP/2. All of our knowledge and experience as developers, maintainers and operators remains valid under the new protocol.
HTTP/2 is all about speed: sending fewer bytes, reducing latency, keeping connections open longer, multiplexing communications between peers, and prioritizing responses. And because all of this happens under the hood, everybody wins.
Here's the scoop:
- Fewer bytes. Request and response headers are compressed using the new HPACK compression algorithm. Common headers are replaced by index numbers into static and dynamic lookup tables; literal values are compressed with Huffman coding; and headers repeated across requests on the same connection need not be retransmitted at all.
- Reduced latency. HTTP/2 adds a binary framing layer over the underlying TCP connection, sending control and data frames on the wire tagged with stream identifiers. This allows multiple requests and responses to be in flight simultaneously over a single connection, eliminating the head-of-line blocking problem that plagued developers under HTTP/1 pipelining.
- Longer connections. Sessions between user-agent and server remain open for an extended period of time, in anticipation of the likely need for further request/response cycles. The overhead of opening and closing socket connections is thus eliminated for the bursts of activity that occur when loading typical web pages. Keep-alive headers are no longer needed.
- Multiplexing. A single connection carries many concurrent, bidirectional streams. Once a user-agent initiates a session, the server is free to begin sending data that it thinks the user will likely need — even before the user has asked for it — by announcing it with PUSH_PROMISE frames. This is HTTP/2 server push, and it opens up many exciting new possibilities.
- Prioritization. Web page rendering begins even before all of the resources it needs are available. Stream prioritization provides a measure of control over this process, allowing developers to specify which resources are most important and which can be delayed. Careful use of this allows browsers to size and flow page contents correctly the first time, without jittery re-rendering. Developers can specify the order and relative priority in which fonts, style sheets, media, and scripts are transmitted to the user.
Read on for all the gory details.
A complete solution
Read Write Serve HTTP/2 server implements all of the methods you normally use, plus all of the others needed for REST APIs and WebDAV, including OPTIONS. The availability and scope of each method is configurable on a resource-by-resource basis, and is reported to the user-agent through the allow response header.
Content negotiation is automatically carried out for:
- accept — MIME-type negotiation,
- accept-language — i18n solutions,
- accept-encoding — DEFLATE and GZIP compression.
Caching is configurable, honoring timestamps and ETags, making best use of both types of request headers:
- if-modified-since headers for timestamp caching,
- if-none-match headers for ETag caching.
Range requests for partial downloads are supported, for both simple and multipart byte ranges. Requests with if-range headers — for timestamp or ETag conditional requests — are fully honored.
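The single-range case can be sketched like this (illustrative only; parseRange is an invented helper, and unsatisfiable ranges are simply rejected rather than clamped):

```javascript
// Sketch of single-range parsing (illustrative): handle "bytes=start-end",
// "bytes=start-" and suffix "bytes=-n" forms against a resource of `size` bytes.
function parseRange(rangeHeader, size) {
  const m = /^bytes=(\d*)-(\d*)$/.exec(rangeHeader || '');
  if (!m || (m[1] === '' && m[2] === '')) return null;          // not usable
  const start = m[1] === '' ? size - Number(m[2]) : Number(m[1]); // suffix form
  const end = m[1] !== '' && m[2] !== '' ? Number(m[2]) : size - 1;
  if (start < 0 || start > end || end >= size) return null;     // unsatisfiable
  return { start, end, contentRange: `bytes ${start}-${end}/${size}` };
}
```

A null result maps to a 416 Range Not Satisfiable (or a plain 200 with the full body); otherwise the server responds 206 Partial Content with the computed content-range header.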
Both simple CORS requests and preflight CORS requests are fully implemented.
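A preflight check can be sketched like this (the whitelist values and function name are hypothetical, shown only to illustrate the protocol):

```javascript
// Sketch of preflight CORS handling (illustrative): answer an OPTIONS request
// based on a whitelist of allowed origins and methods.
const ALLOWED_ORIGINS = ['https://app.example.com']; // hypothetical config
const ALLOWED_METHODS = ['GET', 'POST', 'DELETE'];

function preflightResponse(headers) {
  const origin = headers['origin'];
  const method = headers['access-control-request-method'];
  if (!ALLOWED_ORIGINS.includes(origin) || !ALLOWED_METHODS.includes(method)) {
    return { ':status': 403 }; // refuse: browser will block the real request
  }
  return {
    ':status': 204,
    'access-control-allow-origin': origin,
    'access-control-allow-methods': ALLOWED_METHODS.join(', '),
    'access-control-max-age': '86400', // cache the preflight for a day
  };
}
```

Simple CORS requests skip the OPTIONS round trip; the server just attaches the access-control-allow-origin header to the actual response.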
Read Write Serve HTTP/2 server is a full-fledged static server. Use it wherever you need HTML, CSS, JS, images, fonts, multi-media, PDFs and other resources served in a classic Web scenario.
Modular and configurable
Read Write Serve HTTP/2 server is built with a modular architecture. Here's a high-level overview to see how it stacks up:
- The inner workings of the kernel are encapsulated within a high-performance core.
- Sessions, sockets, and streams are exposed through a Node.js interface.
- Protocol level handling of the request/response cycle is carried out with dynamic modules — enabled or disabled to match individual needs.
- Dynamic module settings are configured using a declarative language.
This modular architecture allows every installation to be fine-tuned to the features it needs. The end result: fewer processing cycles and faster throughput.
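The declarative configuration language itself isn't spelled out in this overview, so the fragment below uses an invented syntax, purely to illustrate the idea: each module is switched on or off and tuned in place.

```
# Hypothetical module configuration (invented syntax, for illustration only)
server {
  modules {
    etag              on
    content-encoding  on
    cache-control     on
    ip-access         off
  }
}
```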
Here's a quick rundown outlining the server's optional modules:
ip-access module can block blacklisted IP addresses from making requests to the server.
forbidden module prevents access to file paths that should not be exposed within the public document area.
rbac module is a Role Based Access Control protocol allowing granular permissions to be set on resource paths.
cross-origin module is used to configure the CORS protocol.
accept-language module handles content language negotiation.
content-encoding module saves outgoing bandwidth by compressing responses.
etag module allows browsers to handle caching with fewer false positives.
cache-control module defines the browser caching instructions to be sent with each response.
user-agent module can recognize crawlers and selectively disable path access and/or speculative push notifications.
resource-masks module converts SEO-friendly URLs and microservice API calls into canonical server paths.
push-priority module configures the rules to use for requests that are candidates for HTTP/2 speculative push protocol.
information-headers module provides contextual information about response status codes.
custom-errors module displays error messages in a natural language suitable to the website's readers, and with CSS to match the website's styling rules.
counters module provides real-time access to basic server usage data.
policies module is used to configure the security and error-logging policies that browsers should enforce.
Everything is available — there's a module for that!
Everything is configurable — don't need it? Don't enable it!
Everything is possible — can't find it? Build it and plug it in!
Extensible with plugins
The use cases are wide ranging:
- Access to the host's file system opens the door for templating solutions, storage solutions, and temp file needs.
- Access to databases means CRUD over a REST API is drop-dead simple.
- Access to serial port communications means IoT can be controlled and monitored remotely.
- Access to the host's processor allows real-time health-checks.
- Access to the host's network stack opens the possibility for sockets, IPC and real time communications via HTTP.
And unlike middleware solutions, every plugin is chained to the server's dynamic module stack to handle compression, caching, permissions, security, logging, and monitoring.
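The real plugin API isn't documented in this overview, so the shape below (a handler registered for a path prefix) is invented purely to illustrate the kind of business-specific plugin described above, here a real-time health check:

```javascript
// Hypothetical plugin sketch. The `path` routing hook and the
// (request, response) handler signature are invented for illustration;
// consult the plugin API documentation for the real interface.
function healthCheckPlugin() {
  return {
    path: '/api/health', // hypothetical routing hook
    handle(request, response) {
      // A real-time health check using the host's process stats.
      response.status = 200;
      response.body = JSON.stringify({
        uptimeSeconds: process.uptime(),
        rssBytes: process.memoryUsage().rss,
      });
    },
  };
}
```

Because the plugin sits behind the module stack, its JSON payload would still flow through compression, caching, permissions, and logging like any static resource.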
Read Write Serve HTTP/2 server combines all the goodness of a static server with the power of business-specific plugins. Welcome home!