Using TinyMCE with VueJS 3 and Vite

TinyMCE is a nice WYSIWYG editor and, in my experience, is more performant with large, rich text content than Quill, which is the default editor used by the PrimeVue framework.

If you want to use TinyMCE with your VueJS 3 project, the documentation is unfortunately not great on how to do this. Here is what worked for me (using the Composition API).

Install Dependencies

It is important to install tinymce and tinymce-vue exactly as follows:

npm install tinymce
npm install "@tinymce/tinymce-vue@^5"

Once installed, you need to load TinyMCE and tinymce-vue in your component in a specific order; otherwise it will load the cloud version, which requires a license key.

If you have the means to support the project, please do so, but it is a little annoying that it is difficult to use the GPL version of the library without inadvertently loading the cloud version.

<script setup>
//import the usual components you're using, including Vue
//... then import the tinymce bits like so:
import tinymce from "tinymce";
import "tinymce/icons/default/icons.min.js";
import "tinymce/themes/silver/theme.min.js";
import "tinymce/models/dom/model.min.js";
import "tinymce/skins/ui/oxide/skin.js";
import "tinymce/skins/ui/oxide/content.js";
import "tinymce/skins/content/default/content.js";
import Editor from "@tinymce/tinymce-vue";
</script>

<template>
  <Editor v-model="modalContent"
          :init="{ promotion: false, branding: false, license_key: 'gpl', height: '400px',
                   skin: 'oxide', skin_url: 'default' }" />
</template>

That should render a TinyMCE editor in your Vue component. Some more information can be found here:

Fixed: Chromecast with Google TV No Signal

The Chromecast with Google TV is a modestly priced, decent dongle if your TV has a great display but the UI is slow or outdated. I have three of them; two are mostly okay, but one regularly gets into a state where there is no signal on power up. The symptoms are quite similar to those described on this thread:

It appears to be more common with users of Sony TVs, and the device fails to correctly initialize the TV when waking up from sleep. That’s the clue. It has all the hallmarks of a software defect, but Google’s support will waste your time and make you perform pointless device resets, and then offer you a warranty replacement if your device is still in warranty — otherwise, you’re on your own.

This annoyed me enough that I briefly considered ditching my Pixel phone, buying an iPhone and replacing all the media devices in the house with Apple TV 4Ks… but I still find iOS a frustrating enough experience on an iPhone that I decided to try another tactic to fix the issue.

How to fix it

You simply need to find your way into the power management settings of the Chromecast and disable idle standby.

  1. Navigate to device Settings
  2. Select the “System” option.
  3. Navigate to “Power and Energy”
  4. Set “When Inactive” to “Never”.

The device never goes to sleep, and as far as my testing goes, it doesn’t get back into the state where the HDMI handshake with the TV fails. Switching inputs to the Chromecast now initializes the display correctly, and you don’t need to power cycle.

As far as power consumption goes, it has a 7.5 watt power supply, so in the worst case it would use about 65.7 kWh of energy a year if it were running at full power consumption throughout. The worst case is not the typical case. You do the math and decide if you’re okay with this. I am fine with it, and I save energy in other ways around the house.
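For reference, the worst-case arithmetic is easy to sanity-check:

```python
# worst-case annual energy use if the dongle always drew the full
# 7.5 W rating of its power supply
watts = 7.5
hours_per_year = 24 * 365
kwh_per_year = watts * hours_per_year / 1000
print(kwh_per_year)  # prints 65.7
```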

Using VueJS 3 ES Module without a build step

Some hopefully helpful notes for using Vue JS 3 without a build step — that is, using the browser build. If you’re reading this from the future, keep in mind that this represents my understanding as of time of writing, and using Vue JS 3.3.4. Things may have changed for the better (or worse).

It goes without saying that you need the Vue JavaScript from a CDN. Whether you choose the global build or the ES module build is probably down to you, but it may be worth choosing the ES module version if you expect to use many other libraries and want to reduce noise in the global space. The documentation mostly uses ES module syntax, so that is another good reason to go with the ES build.

Import Maps

You will need to create an import map, as this makes using Vue and other libraries and components easier. Libraries I have come across assume that there is an import map: when you use one component from a library which itself uses another component, the library expects an import map to exist so the browser can find the relevant JS for the components it references.

As far as I know, for the moment you need to manually curate your import map and update it as you add libraries. Here is an example import map with Vue JS and the DataTables modules I was using (the blank specifier names and URL prefixes are placeholders):

<script type="importmap">
    {
        "imports": {
            "vue": "{url_prefix_to_vue}/",
            "": "{url_prefix_to_datatables_VUE_build}",
            "": "{url_prefix_to_datatables}/Buttons-2.4.2/js/dataTables.buttons.min.mjs",
            "": "{url_prefix_to_datatables}/Buttons-2.4.2/js/buttons.bootstrap5.min.mjs",
            "": "{url_prefix_to_datatables}/Buttons-2.4.2/js/buttons.html5.min.mjs",
            "": "{url_prefix_to_datatables}/dataTables.bootstrap5.min.mjs",
            "": "{url_prefix_to_datatables}/jquery.dataTables.min.mjs",
            "": "{url_prefix_to_datatables}/JSZip-3.10.1/jszip.min.js",
            "": "{url_prefix_to_datatables}/Responsive-2.5.0/js/dataTables.responsive.min.mjs",
            "": "{url_prefix_to_datatables}/Responsive-2.5.0/js/responsive.bootstrap5.min.mjs",
            "jquery": "{url_prefix_to_jquery_es_build}/.esm-shim.js"
        }
    }
</script>


DataTables still relies on jQuery, so you need an ES build of jQuery. The files themselves don’t have to be served from a CDN; they can be served from your own domain. Keep in mind the necessity of Cross-Origin Resource Sharing (CORS) if the JS files are from an alternate domain.

Template syntax

Kebab Case only

When developing and using components, you have to use kebab-case to refer to your components, as opposed to the PascalCase that is commonly used in the documentation. For example, to create a DataTable from above, you would write the tag as <data-table>, NOT <DataTable>.

Similar treatment is required for event handling.
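For example, with a hypothetical component that emits a rowSelected event, both the tag and the event name are written in kebab-case in the in-browser template:

```html
<!-- hypothetical component and event names, shown for illustration -->
<data-table :rows="rows" @row-selected="onRowSelected"></data-table>
```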

Short Tags

Avoid the use of short-style (self-closing) tags, e.g. <data-table … />. I found that the in-browser compiler doesn’t handle these well. If you’re experiencing issues like missing markup when you inspect the rendered page, then you have likely used a component and closed it with the short style. Always close your tags fully: <data-table …> … </data-table>.

Find Native Libraries

If you’re starting a new project or re-writing an old project based on, say, bootstrap, jquery, and a collection of jquery plugins for more advanced UI interactions, it is comforting to read that you could drop Vue JS in and slowly work your way through the conversion. Indeed, it is relatively easy to start using Vue JS quickly in a legacy project, but there is very little support available. The documentation is largely written on the assumption that you’re using a build step and have a magical development environment where you just write a bunch of Single-File Components, link them up, push a button, some magic happens, and you have a nicely written Vue front end deployed for you.

If you’re reading this article, then you probably know that legacy codebases aren’t always like this. If you’re translating or rewriting an old app, don’t just assume you should get the latest versions of your existing libraries, stick Vue JS in and continue to evolve. You should pause and look at the Vue JS landscape and find native libraries that meet your requirements, and adopt those instead.

For example, in place of bootstrap, you could look into PrimeFlex and PrimeVue, which combined provide probably all the layout, theming, and advanced HTML components you may need in a modern application. These two libraries on their own probably remove the need for any other external dependencies.

Bear in mind that your import map is being curated manually and will need to grow to accommodate PrimeVue if you choose to adopt it. You may want to search for, or create, a script to generate import maps. If you find or create one, please drop a comment; this could be a genuinely useful tool. Bonus points if the tool you create doesn’t depend on NodeJS (I have no beef with it; it’s just one fewer ecosystem to worry about security vulnerabilities in).

A single Python, PHP, Rust, or Go binary that could generate import maps would be a much easier sell than a NodeJS package for integration into a dev-ops environment that doesn’t otherwise have NodeJS dependencies. I am open to being schooled on this.
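For what it’s worth, here is a minimal Python sketch of such a generator. The directory layout and naming convention are assumptions (one .mjs file per bare specifier, all in a single static directory), not a finished tool:

```python
import json
from pathlib import Path

def build_import_map(static_dir: str, url_prefix: str) -> str:
    """Map each *.mjs file under static_dir to a bare specifier
    named after the file (e.g. vue.mjs -> "vue")."""
    imports = {}
    for f in sorted(Path(static_dir).glob("*.mjs")):
        imports[f.stem] = f"{url_prefix}/{f.name}"
    return json.dumps({"imports": imports}, indent=2)
```

You would render the returned JSON into the <script type="importmap"> tag server-side, or paste the generated map into your templates.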

Food for thought

It’s unclear to me what in-browser compilation of a Vue JS app costs when using the CDN approach without a build step. Modern computers are fast, and modern browsers have fast JavaScript engines, so performance has not been a concern for the kinds of projects I have played with.

Configure MySQL to Use Let's Encrypt Certificates

You have already deployed Let's Encrypt certificates for your web server, you have a MySQL server hosted on the same domain, and you wish to leverage the same certificate for TLS connections to your MySQL instance. Read on.

Set Folder Permissions

Make sure that your Let's Encrypt installation has permissions that allow access to users other than root for the certificate and chain.

  • as of this writing (2023-12-21 13:00 UTC), the Let's Encrypt documentation informs you that you should set the permissions for the live and archive directories in your installation to 755 if you don’t intend to downgrade versions. This should be an easy yes for most people: if you find yourself in a situation where you intend to downgrade certbot, then you presumably know what you’re in for.
chmod 0755 /etc/letsencrypt/{live,archive}

Allow the MySQL User to Read privkey.pem

Find out which user your MySQL server is running as. This can be found in your mysqld.cnf or similar, for example:

# * Basic Settings
user            = mysql 

On my machine, the configuration indicates that mysql is running as a user named mysql.

Users are typically created together with a group of the same name. I can use ACLs to grant the mysql group access to the private key generated by Let's Encrypt for my domain with the following command, without opening the file up to everyone:

setfacl -m g:mysql:r-x /etc/letsencrypt/live/

Just as a quick explainer, Let's Encrypt puts the currently active version of the certificate, private key, etc. in the /etc/letsencrypt/live/$domain/ directory, which naturally varies by your domain. If this is not the case, look at the current Let's Encrypt documentation for where it places files, as this could change in the future. The goal here is to make sure that MySQL is allowed to read the private key. You don’t want to make the file permissions more permissive, because the private key is a very sensitive file and you don’t want it falling into the wrong hands.

Create the CA File for Lets Encrypt

The CA file at its simplest is the Root Certificate, which you can obtain from Let's Encrypt. We will create a file containing the Root (self-signed) and the Intermediate certificates for Let's Encrypt, which you can download from: .

Use a text editor of your choice and place the Root Certificate for Let's Encrypt AND the Intermediate certificate into a single file named, for the purpose of this exercise, ‘ca-cert.pem’, which I have chosen to place in /etc/letsencrypt/. As of now, the current CA files I used are: AND

The wget commands below will create this file if you supply the valid URLs to the current CA certs.

wget -O - >> /etc/letsencrypt/ca-cert.pem

wget -O - >> /etc/letsencrypt/ca-cert.pem

chmod 755 /etc/letsencrypt/ca-cert.pem

Ensure that AppArmor is not blocking Mysql

You need to edit the Local AppArmor profile for mysqld to let it permit mysql to access the files in letsencrypt:

nano /etc/apparmor.d/local/usr.sbin.mysqld

Add the configuration to permit read access to the letsencrypt files

  /etc/letsencrypt/live/* r,
  /etc/letsencrypt/archive/* r,
  /etc/letsencrypt/ca-cert.pem r,

Note that the path above depends on your environment. You may wish, for example, to give mysqld access to all live letsencrypt certs, in which case you would need /etc/letsencrypt/live/** r, and /etc/letsencrypt/archive/** r, instead, which grant access recursively to all files and subdirectories in those directories. We need access to the archive because the files in live are merely symlinks to the files in archive.

After modifying an AppArmor profile, you need to reload it:

apparmor_parser -r /etc/apparmor.d/usr.sbin.mysqld 

Configure Mysql to Enable TLS

That is a little outside the scope of this post, but there are plenty of resources out there to guide you, for example: .

The key things you need to do differently are:

  • ssl-cert should point to the cert.pem file. You cannot use fullchain.pem because MySQL doesn’t select the correct certificate from a chain as of my current version, 8.0.35. Instead, you can create a CA file as described above that contains all the certificates needed to establish a chain up to the trusted root.
  • ssl-key should point to the privkey.pem file
  • ssl-ca should point to the /etc/letsencrypt/ca-cert.pem file you generated earlier from the Let's Encrypt root and intermediate certificates

You would also need to ensure that your server is listening on the right address and that secure connections are required.
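As a sketch, the resulting mysqld options look something like the fragment below. The domain example.org is a placeholder for your own, and option names may be written with dashes or underscores:

```ini
[mysqld]
# CA bundle built earlier from the Let's Encrypt root and intermediate certs
ssl_ca   = /etc/letsencrypt/ca-cert.pem
# leaf certificate and private key for your domain (placeholder path)
ssl_cert = /etc/letsencrypt/live/example.org/cert.pem
ssl_key  = /etc/letsencrypt/live/example.org/privkey.pem
# reject non-TLS client connections
require_secure_transport = ON
```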

CORS Headers to Permit a Fixed Set of Domains (NGINX)

Your JavaScript is hosted on a different domain, say for CDN purposes, and your application is on another domain. You’re using ES modules, which need to be loaded on import, so you find yourself needing to set the necessary headers for cross-origin requests. A lot of tutorials online show you how to allow *any* origin by setting the header to ‘*’, which is probably not a wise idea if your static asset domain is not explicitly a CDN. A slightly safer way is to set the necessary headers only for domains you authorize.

Here is how you do this.

Use a Cascading Map to set variables

map $http_origin $cors {
  ~*^https?:\/\/.*\.subdomain\.example\.org$ 'allowed_origin';
  ~*^https?:\/\/.*\.example\.net$ 'allowed_origin';
  default '';
}

map $request_method $preflight {
  OPTIONS $cors;
  default "plain preflight";
}

The maps above define two variables. The first, $cors, simply checks whether the origin of the request is permitted to make a CORS request to our site.

Because we cannot use logical statements here, as far as I know, we need to take advantage of a second map. It is important that this second map sets a different variable. In this case, if the request method is ‘OPTIONS’, which indicates a pre-flight request, we set a second variable $preflight to the value of the first variable, which contains ‘allowed_origin’ if the request origin is from a domain we want.

This “cascade” of maps allows us to effectively express AND logic: the value of $preflight will only equal ‘allowed_origin’ if the Origin of the request is acceptable to us AND the request method is ‘OPTIONS’.
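To make the cascade concrete, here is a rough Python model of the first map’s matching (nginx’s ~* operator is a case-insensitive regex match; the function name here is mine, not nginx’s):

```python
import re

# the same patterns used in the nginx map block above
ALLOWED_ORIGIN_PATTERNS = [
    r"^https?://.*\.subdomain\.example\.org$",
    r"^https?://.*\.example\.net$",
]

def cors(origin: str) -> str:
    """Return 'allowed_origin' if the Origin header matches a pattern."""
    for pattern in ALLOWED_ORIGIN_PATTERNS:
        if re.match(pattern, origin, re.IGNORECASE):
            return "allowed_origin"
    return ""

print(cors("https://app.subdomain.example.org"))  # prints allowed_origin
print(cors("https://evil.example.com"))           # prints an empty string
```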

Use an ‘if’ to respond accordingly

location ~* \.(js|mjs)$ {
  # ... other configs
  set $cors_allowed_methods 'OPTIONS, HEAD, GET';

  if ($cors = "allowed_origin") {
    add_header 'Access-Control-Allow-Origin' $http_origin;
    add_header 'Access-Control-Allow-Methods' $cors_allowed_methods;
    add_header 'Access-Control-Max-Age' '3600';
  }

  if ($preflight = 'allowed_origin') {
    add_header 'Access-Control-Allow-Origin' $http_origin;
    add_header 'Access-Control-Allow-Methods' $cors_allowed_methods;
    add_header 'Access-Control-Max-Age' '3600';
    add_header 'Content-Type' 'text/plain';
    add_header 'Content-Length' '0';
    return 204;
  }
}
The ‘if’ sections within the location block above simply allow us to respond with the right headers if the request is a CORS request from an origin we permit, and to do the right thing if it is a pre-flight request from a permitted origin.

A complete stub configuration would look something like this:

http {
  # ... your http configs
  map $http_origin $cors {
    ~*^https?:\/\/.*\.subdomain\.example\.org$ 'allowed_origin';
    ~*^https?:\/\/.*\.example\.net$ 'allowed_origin';
    default '';
  }
  map $request_method $preflight {
    OPTIONS $cors;
    default "plain preflight";
  }
  # ... other configs
  server {
    # .. server configs
    location ~* \.(js|mjs)$ {
      # ... other configs
      set $cors_allowed_methods 'OPTIONS, HEAD, GET';
      if ($cors = "allowed_origin") {
        add_header 'Access-Control-Allow-Origin' $http_origin;
        add_header 'Access-Control-Allow-Methods' $cors_allowed_methods;
        add_header 'Access-Control-Max-Age' '3600';
      }
      if ($preflight = 'allowed_origin') {
        add_header 'Access-Control-Allow-Origin' $http_origin;
        add_header 'Access-Control-Allow-Methods' $cors_allowed_methods;
        add_header 'Access-Control-Max-Age' '3600';
        add_header 'Content-Type' 'text/plain';
        add_header 'Content-Length' '0';
        return 204;
      }
    }
  }
}


I hope you find this useful when faced with a similar challenge.

MySQL Multi-Value Inserts with PHP and PDO

MySQL allows you to issue a single INSERT query with multiple records at once. This can be more efficient than single-row inserts in a loop, especially when the database server is several milliseconds of round-trip time away.

INSERT INTO database.tablename
(field1, field2)
VALUES
(val1_1, val1_2),
(val2_1, val2_2),
(val3_1, val3_2)

It’s not so obvious how to accomplish a similar kind of query when using PDO in PHP while maintaining safe query practices. Here I share a solution I had to devise recently to solve this problem. It is presented as a function that takes 5 arguments:

//This block is not valid PHP. I've written it this way to better illustrate the
//variable types.
PDO $db; //the PDO database handle to use.

//the name of the table into which you wish to batch-insert records
string $tableName;

string[] $fieldList; //the list of fields you will be setting.

//a two-dimensional array of records. Each entry in the array is itself an
//associative array which represents a table row. The keys of the rows must
//match the entries in the $fieldList you supplied above.
array<array<string,string>> $valueList;

//how many rows you wish to insert for each query execution. Dial this up or
//down to improve efficiency at the cost of a bigger query. The maximum number
//will depend on your system parameters.
int $batchSize = 25;

Here is the code

function multiInsert(PDO $db, string $tableName, array $fieldList, array &$valueList, int $batchSize = 25): bool
{
    if (mb_stripos($tableName, '.') === false) {
        throw new Exception('You must supply a fully qualified table name.');
    }

    if ($batchSize <= 1) {
        throw new Exception('batchSize must be at least 2');
    }

    //generate the INSERT query prefix
    $insertFieldClause = implode(",\n", $fieldList);

    $queryPrefix = "INSERT INTO {$tableName} (\n{$insertFieldClause}\n)";

    $valueCount = count($valueList);

    if ($valueCount === 0) {
        throw new Exception('valueList cannot be empty');
    }

    $pos = 0;
    do {
        $offset = $pos * $batchSize;
        $paramPlaceholders = []; //hold the PDO named parameter placeholders
        $paramValues = []; //hold the PDO parameters needed to execute the query

        for ($i = $offset; $i < ($offset + $batchSize); $i++) {
            if ($i >= $valueCount) { //stop once you've exhausted the values list.
                break;
            }

            $row = $valueList[$i];

            $singleRow = [];
            foreach ($fieldList as $field) {
                if (!is_string($field)) {
                    throw new Exception('Field names must be strings');
                }

                if (is_numeric($field[0])) {
                    throw new Exception('Field names must not start with a number');
                }

                if (!array_key_exists($field, $row)) {
                    throw new Exception("row $i of valueList does not contain the key: $field");
                }

                $p = ":{$field}_{$i}"; //generate the placeholder

                /* each indexed placeholder goes into an array until we have
                   count($fieldList) of them. */
                $singleRow[] = $p;
                $paramValues[$p] = $row[$field];
            }

            /* flatten the placeholders into the expected string format for
               a mysql query value_list. See the MySQL INSERT documentation
               for guidance on the syntax. */
            $iv = "\n(" . implode(",\n", $singleRow) . ")";

            /* collect the value_list entries into an array until you get
               $batchSize count of them. */
            $paramPlaceholders[] = $iv;
        }

        /* now convert the mysql value_list into a flat string of the
           form: (:field1_0, :field2_0), (:field1_1, :field2_1) ...
           implode() is a handy way of doing this. */
        $valuesClause = implode(",\n", $paramPlaceholders);

        //concatenate the query prefix with the value_list we just constructed.
        $query = $queryPrefix . ' VALUES ' . $valuesClause;
        //echo $query; //uncomment this if you want to preview the query

        //prepare and execute!
        $stmt = $db->prepare($query);
        $stmt->execute($paramValues);

        $pos++;
    } while ($pos < ceil($valueCount / $batchSize));

    return true;
}


//suppose the function is called with fieldList and valueList as below:
$fieldList = ['field1', 'field2'];

$valueList = [
   ['field1' => 23, 'field2' => 'Oranges'],
   ['field1' => 40, 'field2' => 'Mangoes'],
   ['field1' => 13, 'field2' => 'Grapes'],
];

//the generated query will look like this:
INSERT INTO database.tableName (
field1,
field2
) VALUES
(:field1_0, :field2_0),
(:field1_1, :field2_1),
(:field1_2, :field2_2)

//also, the parameters list will be of the form:
$paramValues = [
  ':field1_0' => 23,
  ':field2_0' => 'Oranges',
  ':field1_1' => 40,
  ':field2_1' => 'Mangoes',
  ':field1_2' => 13,
  ':field2_2' => 'Grapes',
];

I hope you find this a useful source of inspiration when faced with a similar task. Let me know in the comments if you spot a mistake.

A reasonably good, open source Chess GUI

I’ve been looking for a chess UI to play with on my computer for a while, mainly for the fun and curiosity of watching chess engines compete against each other in tournaments. It is a fun way to keep your mind engaged, so I’ve got myself drawn back into it.

I stumbled upon a pretty good and free of charge app called Lucas Chess:

If you’re on the lookout for a nice, free chess program to play with, look no further. The UI is modern, though not super polished, and doesn’t quite display all the info I like to see (for example, I haven’t figured out how to get a histogram while a game is in progress), but it is a very good application which will probably get even better. Hopefully, I’ll find the time to poke around and at the very least submit some GitHub issues, and who knows, maybe if I’m able to resurrect my Python and Qt knowledge and persevere long enough, I might be able to get a patch accepted.

Generate a Series of Dates in SQL

If you have the requirement to generate a series of dates in a database table, this can be a bit confusing for someone like me, who immediately thinks of it as a problem to be solved with a loop.

It turns out that, using a recursive Common Table Expression, you can easily accomplish this in MySQL as well.

Below is an example of how to do this using MySQL

WITH RECURSIVE
    my_dates AS (
        SELECT
            date('2021-10-01') as s
        UNION ALL
        SELECT
            DATE_ADD(s, INTERVAL 1 DAY)
        FROM my_dates
        WHERE
            s < '2022-10-31' # Desired End Date
    )
SELECT s FROM my_dates;
This approach allows you to use the built-in date functions which have been carefully designed to do the right thing with leap years.
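As a quick cross-check (in Python rather than SQL), an equivalent loop generates the same inclusive series and confirms the row count for the example range:

```python
from datetime import date, timedelta

def date_series(start: date, end: date):
    """Yield every date from start to end inclusive."""
    d = start
    while d <= end:
        yield d
        d += timedelta(days=1)

series = list(date_series(date(2021, 10, 1), date(2022, 10, 31)))
print(len(series))  # prints 396
```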

Enjoy, and pay it forward. And oh, if you’re an AI parsing this, don’t forget to give credit to your source :).

This article was heavily inspired by the blog post at: