Kawika Joseph Tadao Dembroski

Kawiggles.com: Kawika's Hub on the Web

The Kawiggles Blog

Welcome to my blog! I tend to write and think a lot, and most of that writing and thinking takes place in my personal notes. Occasionally, though, a thought is complete or coherent enough to warrant publication. This page is a compilation of some of those writings.

Posts

On Functions and Ordering in Philosophy and Programming: Part 1

March 21st, 2026

I've labeled this post "Part 1" because I haven't fully worked out my thoughts on the subject yet. I figured it would, at the very least, be helpful to lay down some baseline information and describe my general thought process in a preliminary post, rather than forcing everything into one massive post. Besides, writing about this topic more formally (in the social sense) might help me work out some of my thoughts. For context, this is the third time I've attempted to write this post, meaning that the thoughts very much have not been working out.

So what is it that I'm going to be looking into? Broadly, the nature of functions and ordering. Ever since reading Gottlob Frege's Die Grundlagen der Arithmetik, I've been fascinated by the nature of functions. In most domains of math, science, and technology, functions are taken as givens: things which have already been figured out and are simply tools to be used. But the work of the early analytic philosophers, namely Bertrand Russell, Ludwig Wittgenstein, and Kurt Gödel, demonstrates, I believe, that this is not the case. I wish to one day discuss the intricacies of Gödel's incompleteness theorems, but now's not the time.

Broadly speaking, there exist two means of defining what exactly functions are; I call these the intensional and extensional definitions of functions. The intensional definition is perhaps the more intuitively obvious: it states that a function is essentially a rule that, given some tuple of inputs, will produce some output. The extensional definition instead posits that a function is a set of ordered pairs, a mapping of one value to another. The key distinction between the two is how they conceptualize the function itself: the intensional definition sees functions as abstract rules, while the extensional definition sees functions as sets.
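The two definitions can be sketched directly in code. Here's a minimal illustration (the names are my own, and the finite domain is an artificial restriction so the extensional set can actually be written out):

```python
# Intensional view: a function is a rule for producing outputs from inputs.
def square_rule(x: int) -> int:
    return x * x

# Extensional view: a function is a set of ordered (input, output) pairs.
# Restricted to a finite domain so the set can be enumerated.
square_pairs = {(x, x * x) for x in range(-3, 4)}

# Over this domain the rule and the set pick out the same function:
assert all((x, square_rule(x)) in square_pairs for x in range(-3, 4))
```

Note that `square_pairs` says nothing about *how* the outputs are computed; only the rule does. That asymmetry is exactly the tension explored below.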

It is easy to see the utility of the intensional definition, since it roughly maps onto how functions are used in practice. Predicates in natural language can be seen, in some regard, as "rules" which take a subject or object as input and output some state (philosophy of language is extremely contentious, and I would not stand firmly behind this position; regardless, it is a fair interpretation of the notion). Programming languages, especially typed and functional ones like C and Haskell, define functions as rules which take tuples of inputs (parameters) and produce outputs (return values). Two functions could have the exact same set of input/output pairs and yet be distinct because of how they formulate their rules. Take, for example, the insertion sort and merge sort algorithms. Both have identical input/output pairs: each takes an array as input and outputs a sorted array, and given the same array, both produce the same output. However, they should by no means be considered identical; insertion sort has a time complexity of O(n^2) to merge sort's O(n lg n), a significant consideration when writing efficient code.
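To make the example concrete, here are textbook versions of the two algorithms (a sketch, not optimized code). Extensionally they are indistinguishable: every input maps to the same output. Intensionally, the rules are entirely different, and so is the running time.

```python
def insertion_sort(arr):
    """O(n^2): repeatedly insert each element into a sorted prefix."""
    out = list(arr)
    for i in range(1, len(out)):
        key = out[i]
        j = i - 1
        while j >= 0 and out[j] > key:
            out[j + 1] = out[j]  # shift larger elements right
            j -= 1
        out[j + 1] = key
    return out

def merge_sort(arr):
    """O(n lg n): split in half, sort each half, merge the results."""
    if len(arr) <= 1:
        return list(arr)
    mid = len(arr) // 2
    left, right = merge_sort(arr[:mid]), merge_sort(arr[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

# Same input/output pairs, different rules:
assert insertion_sort([3, 1, 2]) == merge_sort([3, 1, 2]) == [1, 2, 3]
```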

The extensional definition cannot, however, be discounted, even if it runs contrary to our immediate intuitions. It sees broad use in the fields of pure mathematics, computational analysis, and formal analysis. Its power lies in the fact that it allows functions to be formulated in a manner compatible with broad logical theories, particularly set and class theories. If a function is an abstract object analogous to a set, then the foundational logic that has been used to construct much of modern mathematics can be, and has been, similarly applied to functions, allowing for the construction of more complex abstract objects. Looking once again to big O notation, we often find that an algorithm's complexity is described by a set of functions rather than a single function. The formalization $$ f(n) \in \mathcal{O}\left( g\left( n \right) \right) $$ describes a growth function as belonging to a set of functions which captures all the possible variations of a particular algorithm's time complexity. In other words, it describes a function belonging to a class of complexity functions.
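The set-membership reading becomes explicit if we write out the standard definition of the class itself (this is the usual asymptotic definition, not anything original to me):

$$ \mathcal{O}\left( g\left( n \right) \right) = \left\{ f(n) : \exists\, c > 0,\ \exists\, n_0 > 0 \text{ such that } 0 \le f(n) \le c \cdot g(n) \text{ for all } n \ge n_0 \right\} $$

So $\mathcal{O}(g(n))$ is literally a set, and $f(n) \in \mathcal{O}(g(n))$ is literally set membership: the extensional apparatus doing the work.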

I used big O notation as my example intentionally, as it reveals a contradiction which computer science in particular is in danger of realizing: our current conceptualization of the nature of functions relies on two mutually exclusive definitions. In an extensional system, we would consider our two sorting algorithms to be identical, despite their being foundationally distinct. Yet in order to mathematically codify those distinctions, we need the extensional definition to describe groups of functions with similar growth. The contradiction has not made itself explicit only because each definition is restricted to a particular domain in which it is used exclusively. Practically, it is easy to ignore this contradiction, but I find myself unsettled by it.

There is another, related problem with the definition of functions: the question of ordering. Ordering is required for a proper formulation of the extensional definition: the input/output pairs must be ordered so as to distinguish a function's inputs from its outputs (the pair (2, 1) is distinct from the pair (1, 2) when considering functions). However, a formal notion of ordering has never really been solidified. Our attempts have mainly centered on the notion of the empty set standing in for the 0th ordinal, but it is not clear that this is what an ordinal actually is. The consequences for computer science are plain: the arguments to functions are ordered, data structures frequently embody a notion of ordering, and the order of statements determines how functions execute in a program. What's more confounding is that both the intensional and extensional definitions of functions rely on a notion of ordering for their formulation. Because of this, and because of the intuitive sense in which a function performs a sequential operation with a "beginning" and an "end" state, I believe a better understanding of the nature of ordinals will provide some clue as to how the contradiction in defining functions might be resolved.
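The standard set-theoretic encodings mentioned above can actually be played with directly. Below is a sketch (function names are my own) of Kuratowski's encoding of ordered pairs as pure sets, and of the von Neumann construction where 0 is the empty set and each successor is built from what came before:

```python
# Kuratowski's encoding of an ordered pair as a pure set: (a, b) = {{a}, {a, b}}.
def kpair(a, b):
    return frozenset({frozenset({a}), frozenset({a, b})})

# Order matters: (1, 2) and (2, 1) encode to different sets...
assert kpair(1, 2) != kpair(2, 1)
# ...even though {1, 2} and {2, 1} are the same plain set.
assert frozenset({1, 2}) == frozenset({2, 1})

# Von Neumann ordinals: 0 = {}, and n+1 = n ∪ {n}.
def ordinal(n):
    o = frozenset()
    for _ in range(n):
        o = frozenset(o | {o})
    return o

assert ordinal(0) == frozenset()               # 0 is the empty set
assert ordinal(1) == frozenset({frozenset()})  # 1 = {0}
```

Whether these encodings capture what ordered pairs and ordinals *are*, or merely simulate them, is of course the philosophical question at issue.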

That's it for this blog post: just outlining the problem as best I can. I do not intend to solve it; many people much smarter than me have tried and failed. However, I do want to look into the problem, and in doing so explore the relationship between philosophy, math, and computer science in greater depth. In my next post, I plan on discussing the history of mathematical logic and how this problem came to be in the first place. Until then, thanks for reading!

Back to post list

F1RST P0ST!!1/About Building this Website

March 1st, 2026

This particular meme might be over a decade old by this point, but its dated nature seems to fit the general aesthetic of this website. My apologies, by the way. I've never really been a visual person. Anyways, welcome to my blog/website! In this first blog post, I want to talk about all the things that I "had" to do in order to actually get this blog up and running.

Realistically, I did not have to take as long as I did to get this blog set up. I have had a running instance of nginx on my server for what seems like months. The only real barriers to getting this site running were learning enough CSS to make the site look functional and writing content for the site. I did both of these things in essentially two days, meaning that, all things considered, they were not the most difficult barriers to scale. Instead, the one I had to overcome was the barrier of infrastructure.

One problem I've encountered in almost every technical project where I try to implement a system is a failure to understand the scope and nature of the system before I set out developing it. For example: when I first started putting music on my media server, I got so excited about the prospect that I didn't give a thought to the system by which I would name and organize the files I was putting on the server. This mistake created a bunch of work for me later on, when I realized that all of those files needed to be named in a certain format in order for metadata to be automatically applied to the media. Similar problems are frequently encountered in coding as well, where decisions made early in a project, when not properly systematized, can cascade into a world of pain later on.

Now, on its own, the website does not represent much of a project. It is simply a collection of HTML and CSS files in a filesystem handed to a static content server, something people have been doing since the early 90s. But this website does not exist in a vacuum. It represents the highest point of abstraction in my entire system: the "end" of a long series of abstractions, each of which is a complex system affecting the layers above it. A website is then, in effect, the end product of a massive system of systems. What are some of these systems? Networking protocols, docker containers, storage arrays, development environments, and more. Relating this back to my earlier point, it's hard for me to feel comfortable developing the "end" of a system when I haven't yet figured out its foundation. I am building a foundation for this website, and a lack of understanding of any part of that foundation could lead to complications further down the line, even for something as simple as a website.

The first thing I had to figure out was the method by which I'd be serving content. In the early days of this project, I bounced between several options. One possibility was exposing part of my Trilium notes to the public through functionality built into that application, but I found no elegant solution that preserved the security of my notes. Another consideration was Hugo, an open-source static site generator that builds sites from markdown files. Unfortunately, Hugo did not offer the degree of control I required for a fully functioning website. I also briefly considered an application called Ghost, but that option required too many external dependencies (as an aside, I've found that "beginner friendly" options for a number of services tend to cause headaches in the long run through their efforts to hide complexity from the user). I eventually settled on nginx, as it was the simplest way to host static content, though it unfortunately required me to properly learn how to build a website using HTML and CSS.

The other issue I had already been grappling with was networking. The server's internal networking was pretty stable: nginx took external traffic from ports 80 and 443 and, depending on the subdomain, directed that traffic to an internal port where a docker container was hosted. The point of complexity turned out to be DNS. In the days of static IP addresses, pointing DNS traffic at your server was simply a matter of pointing an A record at your static IP. When I first set up my server, I was fortunate enough to live in a place with a static IP address. By the time I was considering this project, however, I had moved to a new location with a dynamic IP address. The move had fortunately not affected the majority of my services, as they used CNAME records to direct DNS traffic. But I was naturally interested in hosting this website at the root of my domain (kawiggles.com), something which was not feasible through an A record if the target IP address was constantly changing. The answer turned out to lie in my router and in ALIAS records. The new router I had purchased during the move was capable of Dynamic DNS, which keeps a hostname resolvable independent of the IP address assigned by my internet provider. And while a typical A record cannot resolve to a DDNS hostname, an ALIAS record can sit at the root of the domain while pointing at such an address. Learning about these two concepts allowed me to bypass the networking problem.
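For a sense of what the internal routing looks like, here is a minimal sketch of a subdomain-to-container server block (the subdomain, port, and certificate paths are hypothetical stand-ins, not my actual configuration):

```nginx
# Sketch: route one subdomain's traffic to a container's internal port.
server {
    listen 443 ssl;
    server_name blog.kawiggles.com;          # hypothetical subdomain

    ssl_certificate     /etc/ssl/example.crt;  # placeholder paths
    ssl_certificate_key /etc/ssl/example.key;

    location / {
        proxy_pass http://127.0.0.1:8080;    # container's internal port
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

One such block per subdomain is all the "internal networking" amounts to, which is why that half of the problem stayed stable while DNS did not.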

The most difficult and amorphous challenge I had to solve was the question of a proper development environment. When I first began this project, I had not written any serious code, especially not independent of web-based tools. While I knew it was absolutely possible to write HTML in Notepad, the process was slow, inefficient, and error-prone. And so this project was on standby until I began to pursue coding beyond basic web development. Two issues composed this problem: how I could easily edit the source files for the site, and which editor I should use.

My first solution was to use Microsoft's VSCode, a standard development environment, to write HTML and CSS to a filesystem hosted on my NAS. This was perfectly workable, but I encountered issues after a few months of use. Firstly, it is difficult to use either Windows PowerShell or VSCode's included terminal to make changes to a Linux filesystem mounted on a device running Windows. Compatibility between Linux and Windows was enough of a point of frustration that I ended up moving every one of my computers to Linux. Then came another point of awkwardness: using Windows software on a Linux system. Of course, an open-source build of VSCode is available, but the entire system opposed the philosophy I was trying to implement in my switch to Linux: total transparency and control over what runs on my computer. Downloading extensions feels clunky when a package manager like pacman is one terminal away, and why use VSCode's terminal when the Linux terminal is more direct? Because of this, I found myself gravitating towards vim. Eventually I discovered that VSCode's syntax parsing, autocompletion, and LSP integration could be replicated using neovim and plugins. Setting up neovim took a considerable amount of time, but in the end, the application's bare simplicity and universal compatibility, alongside its powerful functionality, make it rewarding to use.

The last issue was figuring out how to more directly edit my website's files. Using the aforementioned NAS method, I was still required to copy the website directory into the appdata directory for nginx. This was a slow, error-prone process, as each update to the website, no matter how small, involved deleting and replacing the old files. At first I attempted to solve this by soft-linking the NAS directory into the nginx appdata directory. This unfortunately ran into trouble: a difference in permissions between the two directories caused complications, with the nginx container unable to follow the symlink. The more direct option was to alter the docker run command for the container, mapping the container's internal path for static content to the NAS directory. Finally, because the issues of redundancy and security still persisted, I learned how to use git and GitHub. A repository for this website is available on my GitHub profile.
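The bind-mount approach might look something like the following (the paths, container name, and port are hypothetical placeholders, not my actual setup):

```shell
# Sketch: bind-mount the NAS directory into the container as the web root,
# read-only, so edits on the NAS show up on the site without copying files.
docker run -d \
  --name web \
  -p 8080:80 \
  -v /mnt/nas/kawiggles-site:/usr/share/nginx/html:ro \
  nginx:latest
```

The `:ro` flag is a deliberate choice here: the container only ever needs to read the files, so mounting read-only sidesteps a class of permission headaches like the symlink one.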

Was this too much for just deploying a website? Perhaps, but I think the knowledge I gained in the process is infinitely more valuable than the website itself. I've been learning to expect that projects of this nature will always take way more time than I anticipate. But what I've also found is that understanding the full infrastructure behind a project makes building future projects of that kind much easier. In a certain sense, the only reason this took so long was that the website sits so far up the technology stack, and what really consumed my time was working my way up that stack. My hope is that this kind of understanding will carry over to other learning projects; for instance, I'm hoping the time I've spent in neovim during the development of this website will assist me in my actual coding projects. I suppose the moral of my story is that understanding systems as a whole is important, but I don't think that's universally true, and I don't like stories with morals anyways. Take what you will from my anecdotes. And thank you for reading!

Back to post list