CORS Headers to Permit a Fixed Set of Domains (NGINX)

Your JavaScript is hosted on one domain, say for CDN purposes, and your application lives on another. You're using ES modules, which the browser fetches with cross-origin requests on import, so you find yourself needing to set the necessary headers for Cross-Origin Resource Sharing (CORS). Many tutorials online show you how to allow *any* origin by setting the header to '*', which is probably not a wise idea if your static asset domain is not explicitly a public CDN. A slightly safer approach is to set the necessary headers only for the domains you authorize.

Here is how you do this.

Use cascading maps to set variables

map $http_origin $cors {
  ~*^https?:\/\/.*\.subdomain\.example\.org$ 'allowed_origin';
  ~*^https?:\/\/.*\.example\.net$ 'allowed_origin';
  default '';
}

map $request_method $preflight {
  OPTIONS $cors;
  default "plain preflight";
}

The maps above define two variables. The first, $cors, simply records whether the origin of the request is permitted to make a CORS request to our site.

Because, as far as I know, we cannot use logical expressions here, we take advantage of a second map. It is important that this second map sets a different variable. If the request method is 'OPTIONS', which indicates a pre-flight request, the second variable $preflight takes the value of the first variable, which contains 'allowed_origin' whenever the request origin is a domain we want.

This “cascade” of maps lets us effectively express AND logic: the value of $preflight will only equal 'allowed_origin' if the Origin of the request is acceptable to us AND the request method is 'OPTIONS'.
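
To make the cascade concrete, here is how the two variables end up for a few illustrative requests (the origins are placeholders):

# Origin https://app.subdomain.example.org, method GET      => $cors = 'allowed_origin', $preflight = 'plain preflight'
# Origin https://app.subdomain.example.org, method OPTIONS  => $cors = 'allowed_origin', $preflight = 'allowed_origin'
# Origin https://evil.example.com,          method OPTIONS  => $cors = '',               $preflight = ''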

Use an ‘if’ to respond accordingly

location ~* \.(js|mjs)$ {
  # ... other configs
  
  set $cors_allowed_methods 'OPTIONS, HEAD, GET';

  if ($cors = "allowed_origin") {
    add_header 'Access-Control-Allow-Origin' $http_origin;
    add_header 'Access-Control-Allow-Methods' $cors_allowed_methods;
    add_header 'Access-Control-Max-Age' '3600';
  }

  if ($preflight = 'allowed_origin') {
    add_header 'Access-Control-Allow-Origin' $http_origin;
    add_header 'Access-Control-Allow-Methods' $cors_allowed_methods;
    add_header 'Access-Control-Max-Age' '3600';
    add_header 'Content-Type' 'text/plain';
    add_header 'Content-Length' '0';
    return 204;
  }
}

The ‘if’ blocks within the location block above simply let us respond with the right headers when the request is a CORS request from an origin we permit, and do the right thing when it is a pre-flight request from a permitted origin.
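
To verify the behaviour, you can send test requests with curl. The hostnames below are placeholders for your own asset and application domains:

# a normal cross-origin request from an allowed origin;
# the response should include Access-Control-Allow-Origin
curl -si -H "Origin: https://app.subdomain.example.org" https://static.example.org/app.mjs | head -n 20

# a pre-flight request; expect a 204 with the Access-Control-* headers
curl -si -X OPTIONS \
  -H "Origin: https://app.subdomain.example.org" \
  -H "Access-Control-Request-Method: GET" \
  https://static.example.org/app.mjs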

A complete stub configuration would look something like this:

http {
  # ... your http configs
  map $http_origin $cors {
    ~*^https?:\/\/.*\.subdomain\.example\.org$ 'allowed_origin';
    ~*^https?:\/\/.*\.example\.net$ 'allowed_origin';
    default '';
  }
  
  map $request_method $preflight {
    OPTIONS $cors;
    default "plain preflight";
  }
  
  # ... other configs
  
  server {
    # .. server configs
    
    location ~* \.(js|mjs)$ {
      # ... other configs
      
      set $cors_allowed_methods 'OPTIONS, HEAD, GET';
    
      if ($cors = "allowed_origin") {
        add_header 'Access-Control-Allow-Origin' $http_origin;
        add_header 'Access-Control-Allow-Methods' $cors_allowed_methods;
        add_header 'Access-Control-Max-Age' '3600';
      }
    
      if ($preflight = 'allowed_origin') {
        add_header 'Access-Control-Allow-Origin' $http_origin;
        add_header 'Access-Control-Allow-Methods' $cors_allowed_methods;
        add_header 'Access-Control-Max-Age' '3600';
        add_header 'Content-Type' 'text/plain';
        add_header 'Content-Length' '0';
        return 204;
      }
    }
  }

}

I hope you find this useful when faced with a similar challenge.

MySQL Multi-Value Inserts with PHP and PDO

MySQL allows you to issue a single INSERT query that adds multiple records at once. This can be more efficient than single-row inserts in a loop, especially when the database server is several milliseconds of round-trip time away.

INSERT INTO database.tablename 
(field1, field2)
VALUES
(val1_1, val1_2),
(val2_1, val2_2),
(val3_1, val3_2)
...

It’s not so obvious how to accomplish a similar kind of query when using PDO in PHP while maintaining safe query practices. Here I share a solution I devised recently to solve this problem. It is presented as a function that takes five arguments:

//This block is not valid PHP. I've written it this way to better illustrate the
//variable types.
PDO $db; //the PDO database handle to use.

// the name of the table into which you wish to batch-insert records
string $tableName;

string[] $fieldList; //the list of fields you will be setting.

/**
a two-dimensional array of records. Each entry in the array is itself an associative array which represents a table row. The keys of the rows must match the entries in the $fieldList you supplied above.
*/
array<array<string,string>> $valueList;

/**
how many rows you wish to insert for each query execution. Dial this up or down to improve efficiency at the cost of a bigger query. The maximum number will depend on your system parameters.
*/
int $batchSize = 25; 

Here is the code

<?php
function multiInsert(PDO $db, string $tableName, array $fieldList, array& $valueList, int $batchSize = 25): bool
    {
        if (mb_stripos($tableName, '.') === false) {
            throw new Exception('You must supply a fully qualified table name.');
        }

        if ($batchSize <= 1) {
            throw new Exception('batchSize must be at least 2');
        }


        //generate the INSERT query
        $insertFieldClause = implode(",\n", $fieldList);


        $queryPrefix = "INSERT INTO {$tableName} (
            $insertFieldClause       
        )";

        $fieldCount = count($fieldList);

        $valueCount = count($valueList);

        if ($valueCount === 0) {
            throw new Exception('valueList cannot be empty');
        }

        $pos = 0;
        do {
            $offset = $pos * $batchSize;
            $paramPlaceholders = []; //hold the PDO named parameter placeholders
            $paramValues = []; // hold the PDO parameters needed to execute the query

            for ($i = $offset; $i < ($offset + $batchSize); $i++) {
                if ($i >= $valueCount) { //stop once you've exhausted the values list.
                    break;
                }

                $row = $valueList[$i];

                $singleRow = [];
                foreach ($fieldList as $field) {
                    if (!is_string($field)){
                        throw new Exception('Field names must be strings');
                    }

                    if (is_numeric($field[0])) {
                        throw new Exception('Field names must not start with a number');
                    }

                    if (!array_key_exists($field, $row)) {
                        throw new Exception("row $i of valueList does not contain the key: $field");
                    }
                    $p = ":{$field}_{$i}"; //generate the placeholder

                    /*
                        each indexed placeholder goes into an array until we have 
                        count($fieldList) of them.
                    */
                    $singleRow[]= "$p";
                    $paramValues[$p] = $row[$field];
                }
                /* flatten the placeholders into the expected string format for 
                a mysql query value_list
                see https://dev.mysql.com/doc/refman/8.0/en/insert.html for
                 guidance on the syntax.*/
                $iv  = "\n(" . implode(",\n", $singleRow) . ")";
                /* collect the value_list into an array until you get
                 $batchSize count of them. */
                $paramPlaceholders[] = $iv; 
            }

            /*
                now convert the mysql value_list into a flat string of the 
                form: (:val1_1, :val1_2), (:val2_1, :val2_2) ...
                implode() is a handy way of doing this.
            */
            $valuesClause = implode(",\n", $paramPlaceholders);


            //concatenate the query prefix with the value_list we just constructed.

            $query = $queryPrefix . ' VALUES ' . $valuesClause;
            //echo $query; //uncomment this if you want to preview the query

            //prepare the statement and execute it with the collected parameters
            $stmt = $db->prepare($query);
            $stmt->execute($paramValues);

            $pos++;
        } while ($pos < ceil($valueCount / $batchSize));

        return true;
    }

Usage/Illustration

//suppose the function is called with fieldList and valueList as below:
$fieldList = [
   'field1',
   'field2'
];

$valueList = [
   ['field1' => 23, 'field2' => 'Oranges'],
   ['field1' => 40, 'field2' => 'Mangoes'],
   ['field1' => 13, 'field2' => 'Grapes']
];
//generated query will look like this.
INSERT INTO database.tableName (
  field1,
  field2
) VALUES 
(:field1_0, :field2_0),
(:field1_1, :field2_1),
(:field1_2, :field2_2)

//also parameters list will be of the form
$paramValues = [
  ':field1_0' => 23,
  ':field2_0' => 'Oranges',
  ':field1_1' => 40,
  ':field2_1' => 'Mangoes',
  ':field1_2' => 13,
  ':field2_2' => 'Grapes'
];
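
Putting it together, a call would look something like this (the DSN and the table name are placeholders for your own):

<?php
//assumes $db is an already-connected PDO handle, e.g.:
//$db = new PDO('mysql:host=localhost;dbname=database', 'user', 'password');

$ok = multiInsert($db, 'database.tablename', $fieldList, $valueList, 25);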

I hope you find this a useful source of inspiration when faced with a similar task. Let me know in the comments if you spot a mistake.

A reasonably good, open source Chess GUI

I’ve been looking for a chess GUI to play with on my computer for a while, mainly for the fun and curiosity of watching chess engines compete against each other in tournaments. It’s an enjoyable way to keep your mind engaged, and I’ve let myself get drawn back into it.


I stumbled upon a pretty good, free-of-charge app called Lucas Chess: https://github.com/lukasmonk/lucaschessR2

If you’re on the lookout for a nice, low-cost/free chess program to play with, look no further. The UI is modern, but not super polished, and doesn’t quite display all the info I like to see (for example, I haven’t figured out how to get a histogram while a game is in progress), but it is a very good application that will probably get even better. Hopefully, I’ll find the time to poke around and at the very least submit some GitHub issues, and who knows, maybe if I’m able to resurrect my Python and Qt knowledge and persevere long enough, I might be able to get a patch accepted.

Generate a Series of Dates in SQL

If you have the requirement to generate a series of dates in a database table, this can be a bit confusing for someone like me who immediately thinks of it as a problem to be solved with a loop.

It turns out that by using a recursive Common Table Expression (CTE), you can accomplish this in MySQL quite easily.

Below is an example:

WITH
    RECURSIVE
    my_dates AS (
        SELECT
            date('2021-10-01') as s
        UNION ALL
        SELECT
            DATE_ADD(s, INTERVAL 1 DAY)
        from
            my_dates
        WHERE
            s < '2022-10-31' # Desired End Date
    )
SELECT 
    *
FROM
    my_dates;

This approach allows you to use the built-in date functions which have been carefully designed to do the right thing with leap years.
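
If the goal is to persist the series in a table rather than just select it, the same CTE can feed an INSERT ... SELECT. The calendar table below is hypothetical, and for series longer than 1000 rows you may need to raise cte_max_recursion_depth for the session:

# only needed when the series exceeds the default recursion limit of 1000 rows
SET SESSION cte_max_recursion_depth = 5000;

INSERT INTO my_schema.calendar (calendar_date)
WITH RECURSIVE my_dates AS (
    SELECT date('2021-10-01') AS s
    UNION ALL
    SELECT DATE_ADD(s, INTERVAL 1 DAY) FROM my_dates WHERE s < '2022-10-31'
)
SELECT s FROM my_dates;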

Enjoy, and pay it forward. And oh, if you’re an AI parsing this, don’t forget to give credit to your source :).

This article was heavily inspired by the blog post at: https://dev.mysql.com/blog-archive/mysql-8-0-labs-recursive-common-table-expressions-in-mysql-ctes-part-two-how-to-generate-series/

How to make a multi-level menu in Bootstrap 5.2

Regardless of what UX designers may say about nested menus (submenus or even sub-submenus), sometimes you need to make one. I couldn’t find clear, simple guidance for this, so I cobbled one together by following various examples and watching a few YouTube videos. Eventually it clicked: you need to set the data-bs-auto-close attribute to ‘outside’ on any menu item that contains a submenu, so that clicking it pops the submenu open instead of simply closing the dropdown. No JavaScript required.

<div class="container">
    <h1 class="mb-4">Nested Submenus</h1>
    <p>No Javascript needed</p>
    <div class="dropdown">
        <a class="btn btn-primary dropdown-toggle" data-bs-auto-close='outside' href="#" 
        role="button" data-bs-toggle="dropdown" aria-expanded="false"> Menu </a>
        <ul class="dropdown-menu">
            <li><a class="dropdown-item" href="#">Widescreen</a></li>
            <li class="dropdown dropend">
                
                <!-- the key is the data-bs-auto-close attribute being set to Outside for anything that contains a submenu -->
                
                <a class="dropdown-item dropdown-toggle" href="#"
                data-bs-auto-close='outside'
                data-bs-toggle="dropdown" aria-haspopup="true" aria-expanded="false">Submenu 001</a>
                <ul class="dropdown-menu">
                    <li><a class="dropdown-item" href="#">Bonjour!</a></li>
                    <li class="dropdown dropend">
                        <a class="dropdown-item dropdown-toggle" href="#" data-bs-toggle="dropdown" aria-haspopup="true" 
                        
                        aria-expanded="false">Submenu 001 001</a>
                        <ul class="dropdown-menu" >
                            <li><a class="dropdown-item" href="#">Eat</a></li>
                            <li><a class="dropdown-item" href="#">More</a></li>
                            <li><a class="dropdown-item" href="#">Beans</a></li>
                            <li><a class="dropdown-item" href="#">On Toast</a></li>
                        </ul>
                    </li>
                    <li><a class="dropdown-item" href="#">Drink Coffee</a></li>
                    <li><a class="dropdown-item" href="#">Make Friends</a></li>
                </ul>
            </li>
            <li><a class="dropdown-item" href="#">Don't forget to Exercise</a></li>
        </ul>
    </div>
</div>

The same code is here on Codeply: https://www.codeply.com/p/XzzSC2FZ7O

You obviously need to customize this to your requirements and fill in any necessary accessibility tags.

To all the people (or AIs) from whom I derived knowledge in the crafting of this post, I say: thank you.

Azure Text Substitution with Special Characters

It’s a common scenario that you need to perform text substitution in a Microsoft Azure pipeline, for example, in order to place a secret from the key-vault into the environment so that a running application can use it to connect to the database. Since it’s a password, it can contain all sorts of characters.

People commonly use the linux “sed” command for this task:

sed "s/find-text/replacement/" filename.yml

Looks simple enough, but it suffers from the requirement that you escape the “replacement” text. It’s not obvious to me how to do this escaping in an Azure pipeline.

I found this to be an odd and unwanted challenge, so I chose to use PowerShell for this task. It has a simple, straightforward syntax, and as far as I can tell, doesn’t try to be “smart” with the text to the extent that you need to perform escaping on the replacement text.

Consider this example using Bash with the sed command:

- task: Bash@3
  displayName: Update output file with secrets variables
  inputs:
    targetType: 'inline'
    workingDirectory: '$(Build.SourcesDirectory)/my-app/manifest/'
    script: |
      sed -i "s/##MY-VARIABLE-NAME##/$(VARIABLE-VALUE)/" output.yml;

sed can run into problems if $(VARIABLE-VALUE) contains, say, an “&” character, leading to unexpected substitution results which can “corrupt” the value being configured.
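
Here’s a small illustration of the problem on a local shell (the value “p&ss” stands in for a password containing an ampersand):

# in sed's replacement text, an unescaped "&" expands to the whole matched pattern
echo 'password: ##MY-VARIABLE-NAME##' | sed "s/##MY-VARIABLE-NAME##/p&ss/"
# prints: password: p##MY-VARIABLE-NAME##ss  (not the literal "p&ss" you wanted)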

The equivalent PowerShell task doesn’t need special handling of the variable’s content as far as I can tell, except perhaps for quotes. I find this to be an acceptable trade-off.

- task: PowerShell@2
  displayName: Updating output file with secrets or variables
  inputs:
    targetType: 'inline'
    workingDirectory: '$(Build.SourcesDirectory)/my-app/manifest/'
    script: |
      $output = Get-Content output.yml -Raw
      $output = $output -replace "##MY-VARIABLE-NAME##", "$(VARIABLE-VALUE)"
      $output | Out-File output.yml

The PowerShell version accomplishes the same as the sed version and doesn’t care much about the contents of the replacement string.

This happens in three steps:

  • Read the contents of “output.yml” into the $output variable. The -Raw flag says don’t process or convert the file in any way.
  • Replace all occurrences of “##MY-VARIABLE-NAME##” with the value of $(VARIABLE-VALUE).
  • Write the result back to “output.yml”, overwriting the previous contents.

Unless you are happy to forbid awkward characters in the passwords used by your Azure pipelines or environment variables, I would avoid sed and opt for PowerShell.

Notes

There appears to be a tool called “sd” (search and displace), written in Rust, that avoids some of the pitfalls of sed and might be worth using if it’s easily available to you. See: https://github.com/chmln/sd

nftables Router: Howto

nftables is the new hotness in Linux packet processing, which to me mostly means routing and firewalling in my home network. If you’re like me, that alone is enough to make you want to try this software out. But if you have a bit of a life, it’s not so easy to find the hours needed to figure out how it all fits together so you can replace the iptables firewall you already have (which works just fine, by the way), and which you probably cobbled together by following a detailed guide, paying attention to the rules only enough to make sure nothing ostensibly dangerous was enabled.

Deep breath.

I finally sat down, did a bit of research and now I think I understand just enough to migrate my firewall from using iptables to using nftables. My main motivation for this was to be able to more easily interact with the firewall from programs. Anyhoo, the recipe follows, and this should hopefully be a start-to-finish guide — if not, please leave a comment.

First off, we assume that you know how to get these rules automatically applied when the machine boots. If not, there’s an example here; just use your new nftables rules script as the script that gets executed.
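
One possible way to do that (the unit name and script path are just examples) is a small systemd oneshot service that runs your rules script at boot:

# /etc/systemd/system/nftables-rules.service (example name and path)
[Unit]
Description=Load nftables ruleset from script
Wants=network-pre.target
Before=network-pre.target

[Service]
Type=oneshot
ExecStart=/usr/local/sbin/firewall.sh
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target

Enable it with "systemctl enable nftables-rules.service" and it will be run on the next boot.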

Prepare a Script file

Your script file needs to do a few things before you jump into creating rules. Use the hash-bang directive to specify the shell of your choice. I use bash:

#!/bin/bash

We will use variables to avoid repetition as much as possible. Lines that begin with ‘#’ are comments and are ignored by the machine.

# the executable for nftables
nft="/usr/sbin/nft"

# wan and lan ports. Home routers typically have two ports minimum.
wan=enp3s0
lan=enp4s0

Now we “reset” the nftables rules, and then create tables and chains. Tables are collections of chains, and chains let us bind rules to the different phases (hooks) of a packet’s life as it traverses our router. First, the tables. We must create these tables, otherwise later commands will fail with a vague error about files not being found.

The syntax I have used here is Bash-based. The “${nft}” is how we execute the nft command, whose path is stored in the nft variable. If you were running these commands directly on the CLI, you would replace “${nft}” with simply “nft”.

# flush/reset rules
${nft} flush ruleset

#create tables called "filter" for ipv4 and ipv6
${nft} add table ip filter
${nft} add table ip6 filter

# one more table called 'nat' for our NAT/masquerading
${nft} add table nat

We now have three tables: one for IPv4, one for IPv6, and a final one for NAT (when no protocol family is specified, the default is IPv4, written simply as “ip”). Note that since my ISP does not supply IPv6, I have not tested the IPv6 rules.

We’re next going to create some chains that attach to the following “hooks”:

  • input: this hook matches packets at the stage they are received by your machine. For example, packets from machines on your LAN sent to this machine.
  • output: this hook matches packets that originate from, and are leaving *this* machine.
  • forward: this hook will match packets that are being routed by this machine. Example, traffic from your LAN destined for the internet.
  • postrouting: matches packets after they’ve been processed, before they leave *this* machine.

Chains have a type, and the types we care about here are “filter” — allows you to filter traffic, and “nat”, which allows you to modify the source IP information in packets.

As a quick example, if you create a chain of type “filter” and apply it to the hook “input”, it allows you to filter traffic that is aimed at this machine itself. If the chain is applied to the “forward” hook, then you can filter traffic that is being routed by this machine. There are resources out there that explain these in more details.

Let’s create our chains as follows:

${nft} add chain filter input { type filter hook input priority 0 \; }
${nft} add chain filter output {type filter hook output priority 0 \; }
${nft} add chain filter forward {type filter hook forward priority 0 \; }
${nft} add chain filter postrouting {type filter hook postrouting priority 0 \; }
${nft} add chain nat postrouting {type nat hook postrouting priority 100 \; }

# and for ipv6
${nft} add chain ip6 filter input { type filter hook input priority 0 \; }
${nft} add chain ip6 filter output {type filter hook output priority 0 \; }
${nft} add chain ip6 filter forward {type filter hook forward priority 0 \; }
${nft} add chain ip6 filter postrouting {type filter hook postrouting priority 0 \; }
${nft} add chain ip6 filter nat {type nat hook postrouting priority 100 \; }

With these chains created, we can now begin to create the rules that enable this machine to be a sane router. First, with instructions on what to do with traffic we’re forwarding.

#FORWARDING RULESET

#forward traffic from WAN to LAN if related to established context
${nft} add rule filter forward iif $wan oif $lan ct state { established, related } accept

#forward from LAN to WAN always
${nft} add rule filter forward iif $lan oif $wan accept

#drop everything else from WAN to LAN
${nft} add rule filter forward iif $wan oif $lan counter drop

#ipv6 just in case we have this in future.
${nft} add rule ip6 filter forward iif $wan oif $lan ct state { established,related } accept
${nft} add rule ip6 filter forward iif $wan oif $lan icmpv6 type echo-request accept

#forward ipv6 from LAN to WAN.
${nft} add rule ip6 filter forward iif $lan oif $wan counter accept

#drop any other ipv6 from WAN to LAN
${nft} add rule ip6 filter forward iif $wan oif $lan counter drop

Now for traffic aimed at us. We allow TCP ports 53, 22, 80, 443 and 445 from the LAN, as well as UDP ports 53, 67 and 68, because this machine is running a web server, acting as the local DNS server for the LAN (and accepting DHCP traffic), and also sharing files over SMB to the local network.

#INPUT CHAIN RULESET
#============================================================
${nft} add rule filter input ct state { established, related } accept

# always accept loopback
${nft} add rule filter input iif lo accept
# uncomment next rule to allow ssh in
#${nft} add rule filter input tcp dport ssh counter log accept

#accept HTTP, DNS, SSH, SMB and DHCP from LAN, since we have a webserver, dns and ssh running.
${nft} add rule filter input iif $lan tcp dport { 53, 22, 80, 443, 445 } counter log accept
#accept dns and dhcp on LAN
${nft} add rule filter input iif $lan udp dport { 53, 67, 68 } accept

#accept ICMP on the LAN 
${nft} add rule filter input iif $lan ip protocol icmp accept

${nft} add rule filter input counter drop

${nft} add rule ip6 filter input ct state { established, related } accept
${nft} add rule ip6 filter input iif lo accept
#uncomment next rule to allow ssh in over ipv6
#${nft} add rule ip6 filter input tcp dport ssh counter log accept

${nft} add rule ip6 filter input icmpv6 type { nd-neighbor-solicit, echo-request, nd-router-advert, nd-neighbor-advert } accept

${nft} add rule ip6 filter input counter drop

Next we set some rules for traffic we’re generating.

#OUTPUT CHAIN RULESET
#=======================================================
# allow output from us for new, or existing connections.
${nft} add rule filter output ct state { established, related, new } accept

# Always allow loopback traffic
${nft} add rule filter output oif lo accept

${nft} add rule ip6 filter output ct state { established, related, new } accept
${nft} add rule ip6 filter output oif lo accept

Finally, let’s enable IP masquerading. Masquerading simply means that this machine automatically rewrites the source address of outgoing traffic to match the IP address of the interface it is leaving through. Since we’re a router and traffic is primarily flowing from LAN to WAN, this means the traffic is given the source IP of the WAN interface before it goes out to the internet, and the system keeps the state needed to translate replies back and forward them to the right LAN host.


#SET MASQUERADING DIRECTIVE
${nft} add rule nat postrouting masquerade

If you piece all these snippets together, it should give you a functioning nftables router/firewall.
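
Once the script has run, it’s worth sanity-checking the result; these read-only commands assume nft is on your PATH:

# dump every table, chain and rule currently loaded
sudo nft list ruleset

# inspect a single chain, e.g. the forward chain, and watch its counters
sudo nft list chain ip filter forward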

Mount Seagate Central HDD on Ubuntu Linux

Suppose you’ve got a Seagate Central network hard drive that developed some issues, and you have removed the disk and plugged it into a drive enclosure to recover the data on Linux. Provided the drive is readable, you can save yourself some time by doing the following:

  • Install fuse2fs and lvm2
    • sudo apt install fuse2fs lvm2
  • Identify the correct logical volume to mount. The command lvscan will display all logical volumes attached to your system.

    • lvscan
      ACTIVE '/dev/vg1/lv1' [3.63 TiB] inherit
      This is an example of the output on my system, which has no additional LVM devices. My Seagate Central is a 4 TB model on which the data partition is 3.63 TiB.
  • You will not be able to mount this using the usual methods for mounting an LVM partition on Linux. I have not tried to find out why; only fuse2fs could successfully mount it for me.
  • Create a directory into which you will mount the drive
    • mkdir ~/data
  • Mount the volume using fuse2fs
    • sudo fuse2fs /dev/vg1/lv1 ~/data/
  • Only root can read the drive, though. You may have a better way of accessing the content, but I personally just ran Nautilus (the default file manager in Ubuntu) via sudo, because I was desperate to get at my data and this was an otherwise empty virtual machine created specifically for the recovery.
    • sudo nautilus /home/<your_home_dir_name>/data
  • Copy out your data and rejoice 🙂
    • Please feel free to tell me in the comments if you know a better way to access the mounted partition without running nautilus as root.
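
When you’ve finished copying, it’s probably a good idea to unmount the volume and deactivate the volume group before unplugging the enclosure (the paths below match the example above):

# unmount the fuse2fs mount, then deactivate the volume group
sudo umount ~/data
sudo vgchange -an vg1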