Document the use of the various offline/maintenance portals, and provide a skeleton in the default config to allow sending some clients the legacy offline-portal while using the online-maintenance portal for other clients.
Kristian Grønfeldt Sørensen committed Jan 6, 2022
1 parent b4712b3 commit 5e15e66
Showing 3 changed files with 74 additions and 7 deletions.
61 changes: 61 additions & 0 deletions Offline-portal.md
@@ -0,0 +1,61 @@
# Managing the maintenance portal and the offline-portal with Varnish 6 #

This document describes how to set up and operate both the legacy offline-portal and the online maintenance portal with Varnish 6.

Both the legacy offline-portal and the online maintenance portal are sets of static files that must reside on a webserver that can be reached from Varnish. They are usually deployed on a webserver on the Varnish servers themselves, since the amount of traffic they need to serve is minimal. This avoids the need for an extra set of servers in your fokusOn environment.

## Types of maintenance portals ##

With the introduction of Varnish 6, fokusOn now supports multiple maintenance portals.

The legacy offline-portal provides a static channel list for multicast channels to allow basic zapping. The offline-portal is supported only on already-integrated devices; no new device implementations will be made for it, and it will be removed when support for all currently supported devices has ended.

Along with Varnish 6, a new maintenance portal has been introduced, called the online fallback portal (also referred to as the online maintenance portal). It serves the same purpose for multiscreen devices as the offline-portal has served for STB devices: it shows a message on the screen of multiscreen devices, explaining to the user that the backend system is under maintenance or similar. No channel zapping is available in the online fallback portal, since no CA tokens or playback session URLs can be served while the backend is unavailable.

As a replacement for the offline-portal, newly integrated devices should use the "embedded fallback portal". This is a scraped version of the online portal (the normal fokusOn portal) that is embedded into the boot image or firmware of the STB. It allows the user to navigate the menus of the STB even while the system is offline, and shows explanatory error messages when the user tries to activate something that is not available in offline mode.

In the Varnish context and throughout the rest of this document, "offline-portal" refers to all of the above maintenance portals unless specifically stated otherwise.

## Configuring the offline-portal with Varnish 6 ##

To make the offline-portal work at the same time as the online fallback portal, you need to match your devices in Varnish based on the incoming heartbeat or healthcheck request from the client. This match can be done on the User-Agent header or, in some cases, on the Host header if multiscreen devices are served from a different domain than the STBs. The Varnish 6 template provided by 24i includes an example of how to match on User-Agent headers.
Please note that the example assumes the offline-portal is served from the directory /offline-portal/ on the webserver. You must adjust all absolute links in the offline-portal files to point to that path to avoid problems.
The Varnish 6 config for the offline-portal includes all necessary URL rewrites, so mod_rewrite is no longer needed on the Apache webserver serving the offline-portal.
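
As a minimal sketch of the Host-header alternative (the domain `stb.example.com` is a hypothetical placeholder, and this fragment is not part of the shipped template):

```vcl
// Hypothetical Host-based match: in this sketch, STBs are served
// from stb.example.com and get the legacy offline-portal, while all
// other devices fall through to the online fallback portal.
if (req.http.Host == "stb.example.com") {
    set req.url = "/offline-portal/index.html";
}
```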

The online maintenance portal must reside in the webserver root of your webserver. It is recommended to set up health-check probes so that Varnish only sends requests to healthy servers. The default 24i Varnish 6 config includes probes for checking an Apache server with mod_status configured on `/server_status`. If you are using a different webserver, you will need to adapt the probe definitions accordingly.
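
A minimal sketch of what such a probe can look like in VCL is shown below; the backend and probe names, address, port and timing values are illustrative assumptions, not the shipped defaults:

```vcl
# Illustrative sketch only: names, address, port and timings are
# assumptions; the shipped 24i config defines its own values.
probe apache_status {
    .url = "/server_status";
    .expected_response = 200;
    .interval = 5s;
    .timeout = 2s;
    .window = 5;
    .threshold = 3;
}

backend portal_server {
    .host = "127.0.0.1";
    .port = "80";
    .probe = apache_status;
}
```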


## Activating the maintenance portal on a running Varnish instance ##
To activate the offline-portal on a running Varnish 6 instance, you just need to ensure that the file `/etc/varnish/offline-portal.enabled` exists. To switch it off again, just remove the file, as in the following example:

**Activate offline-portal**
`touch /etc/varnish/offline-portal.enabled`

**Deactivate offline-portal**
`rm -f /etc/varnish/offline-portal.enabled`

This should be done on all Varnish 6 instances at the same time, to ensure that devices receive the same response no matter which Varnish 6 instance they connect to.
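
For example, assuming passwordless SSH and hypothetical host names, the activation can be pushed to all instances in one loop:

```sh
# Hypothetical host names; adapt the list to your environment.
for host in varnish01 varnish02; do
    ssh "$host" "touch /etc/varnish/offline-portal.enabled"
done
```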

The embedded fallback portal is not served from Varnish, but it uses the same endpoint to check activation status as the online fallback portal, which is why it is relevant in a Varnish 6 context. Since the embedded fallback portal is part of the firmware/boot image of the STB, it is not covered in detail here. This document only describes how to centrally make the STB switch to the embedded fallback portal and back.
Please also note that if the STB is unable to reach the Varnish server(s), it will switch to the embedded fallback portal independently of the activation state on Varnish 6.

## Controlling the rate at which users return to the normal portal ##
### Offline-portal ###
The index.html of the offline-portal includes the following section, where you can control how often the offline-portal checks whether the normal portal is back:

```javascript
function startChecking() {
    checkIfPortalIsReady.periodical(1000 * 60 * 5); // Check every 5 minutes if main portal is ready
}

startChecking.delay(Math.round(Math.random()*(1000*60*5)));
```

To check every 20 minutes, change 5 to 20 in both lines. We recommend that you use an interval of at least 20 minutes for production setups with more than 20,000 active users.
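
For example, the 20-minute variant looks like this (only the two occurrences of `5` change):

```javascript
function startChecking() {
    checkIfPortalIsReady.periodical(1000 * 60 * 20); // Check every 20 minutes if main portal is ready
}

startChecking.delay(Math.round(Math.random()*(1000*60*20)));
```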

### Online fallback portal ###
The online fallback portal checks in with the backend every 5 minutes to know whether the normal portal should be loaded. It then waits a random period of time before loading the normal portal, so that all clients do not return at once. The upper bound for this waiting time is governed by the header "Random-Max-Delay" sent from Varnish. This header is defined in vcl_synth. To edit it, just change the following line in fokuson.vcl and reload your Varnish config:

```vcl
set resp.http.Random-Max-Delay = "300s";
```

For production setups with more than 20,000 active users, we recommend that you set it to at least 1200s. The only recognised unit for this header is seconds.
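
For example, to raise the upper bound to 20 minutes:

```vcl
set resp.http.Random-Max-Delay = "1200s";
```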


6 changes: 3 additions & 3 deletions README.md
```diff
@@ -4,13 +4,13 @@
 This repository contains Varnish configuration and install files to use with fokusOn. The files are targeted at Varnish 6.0 LTS but should work with later releases. Operators of fokusOn should fork this repository to keep track of local changes specific to their installation.
 
 ## Structure of the repository ##
-The default VCL files are published in the *vcl* folder while the *install* folder holds customized system-related config-files that are used during installation. Please refer to Nordija's Varnish 6 install documentation for details. If you don't have that, please contact Nordija Professional Services.
+The default VCL files are published in the *vcl* folder while the *install* folder holds customized system-related config-files that are used during installation. Please refer to Nordija's Varnish 6 install documentation for details. If you don't have that, please contact 24i.
 
 ## How to use this repository ##
-We recommend that you fork this repository and create branches for each of your environments (Production, staging, test, development etc). This allows you to track your local changes for each environment, and to merge upstream changes from Nordija into your local changes. Nordija strives to make sure that version-specific changes[^1] will be prominent in the commit history of this repository (which constitutes the changelog), and/or in comments next to the relevant sections in the config file.
+We recommend that you fork this repository and create branches for each of your environments (Production, staging, test, development etc). This allows you to track your local changes for each environment, and to merge upstream changes from 24i into your local changes. 24i strives to make sure that version-specific changes[^1] will be prominent in the commit history of this repository (which constitutes the changelog), and/or in comments next to the relevant sections in the config file.
 
 When installing Varnish from scratch in a new environment, you should always use the latest tagged version. All tags will be made on the *master* branch, unless specifically noted elsewhere. If you are looking to update your Varnish configuration as part of or in preparation for upgrades of other components, then please thoroughly read through the release notes of all involved components (including this repository) to ensure that the changes you deploy are compatible with each other.
 
-As always, you should deploy changes to dev, test and staging environments before deploying to production. Any issue that arises because a configuration has not been tested in at least one non-production environment before deployment to production cannot be handled by Nordija as an SLA issue.
+As always, you should deploy changes to dev, test and staging environments before deploying to production. Any issue that arises because a configuration has not been tested in at least one non-production environment before deployment to production cannot be handled by 24i as an SLA issue.
 
 [^1]: Changes specific to a certain version of either fokusOn or another component like Unified-Search or Ads-system.
```
14 changes: 10 additions & 4 deletions vcl/fokuson.vcl
```diff
@@ -27,12 +27,18 @@ sub offline_portal {
   if (req.url == "/dwr/index.html"){
     return(synth(410,"Gone"));
   }
-  //Rewrite start URL's to hit the offline-portal
+  //Rewrite start URL's to hit the online-maintenance portal
   if (req.url ~ "^/client-portal/(custom|device)/"){
-    if (req.http.Accept ~ "text/html"){
-      set req.url = "/";
+    // Use the following if statement to identify devices that need to run the legacy offline-portal, e.g.
+    // "<user-agent1>" and "<user-agent2>". Adapt to match your deployment.
+    if (req.http.User-Agent ~ "(<user-agent1>|<user-agent2>)"){
+      set req.url = "/offline-portal/index.html";
     } else {
-      set req.url = regsub(req.url, ".*(/\w+(\.\w+(\?\w*)?)?$)","\1");
+      if (req.http.Accept ~ "text/html"){
+        set req.url = "/";
+      } else {
+        set req.url = regsub(req.url, ".*(/\w+(\.\w+(\?\w*)?)?$)","\1");
+      }
     }
   }
```
