[{"content":" ██████╗ ██████╗ ██████╗ ██╗ ██╗███████╗███████╗███████╗ ██╔═══██╗╚════██╗██╔══██╗██║ ██╔╝██╔════╝██╔════╝██╔════╝ ██║ ██║ █████╔╝██████╔╝█████╔╝ ███████╗█████╗ ███████╗ ██║▄▄ ██║ ╚═══██╗██╔══██╗██╔═██╗ ╚════██║██╔══╝ ╚════██║ ╚██████╔╝██████╔╝██║ ██║██║ ██╗███████║███████╗███████║ ╚══▀▀═╝ ╚═════╝ ╚═╝ ╚═╝╚═╝ ╚═╝╚══════╝╚══════╝╚══════╝ \"The future belongs to those who believe in the beauty of their dreams.\"\nWhat I\u0026rsquo;m About # Control engineering \u0026amp; cybernetics student at NTNU Trondheim — currently on year four of an integrated master\u0026rsquo;s. I code (sometimes). When I\u0026rsquo;m not studying, I\u0026rsquo;m configuring Neovim, contributing to Vortex NTNU\u0026rsquo;s autonomous underwater vehicles, or resurrecting my Arch install after a pacman -Syu.\nCurrently Working On # 🔧 Vortex Control Stack — Bringing this year\u0026rsquo;s drone into the Stonefish simulator, tuning the controllers for desired performance, and general QoL upgrades to the repo. ⚙️ Neovim Config — AstroNvim-based setup with some modifications to plugins and keybinds 🌐 This Site — You\u0026rsquo;re looking at it ","date":"8 April 2026","externalUrl":null,"permalink":"/","section":"","summary":"","title":"","type":"page"},{"content":"","date":"8 April 2026","externalUrl":null,"permalink":"/tags/c/","section":"Tags","summary":"","title":"C","type":"tags"},{"content":"","date":"8 April 2026","externalUrl":null,"permalink":"/tags/control-systems/","section":"Tags","summary":"","title":"Control-Systems","type":"tags"},{"content":" ██████╗ ██████╗ ██╗ ██╗██████╗ ███████╗███████╗██╗ ██╗ ██████╗ ██████╗ ██╗ ██╗ ██╔════╝██╔═══██╗██║ ██║██╔══██╗██╔════╝██╔════╝██║ ██║██╔═══██╗██╔══██╗██║ ██╔╝ ██║ ██║ ██║██║ ██║██████╔╝███████╗█████╗ ██║ █╗ ██║██║ ██║██████╔╝█████╔╝ ██║ ██║ ██║██║ ██║██╔══██╗╚════██║██╔══╝ ██║███╗██║██║ ██║██╔══██╗██╔═██╗ ╚██████╗╚██████╔╝╚██████╔╝██║ ██║███████║███████╗╚███╔███╔╝╚██████╔╝██║ ██║██║ ██╗ ╚═════╝ ╚═════╝ ╚═════╝ ╚═╝ ╚═╝╚══════╝╚══════╝ ╚══╝╚══╝ 
╚═════╝ ╚═╝ ╚═╝╚═╝ ╚═╝ University labs. Group efforts. The parts I found most interesting. ","date":"8 April 2026","externalUrl":null,"permalink":"/coursework/","section":"Coursework","summary":"","title":"Coursework","type":"coursework"},{"content":" TTK4235 — Embedded Systems (2024) # Single elevator controller in C\nImplement a working elevator controller for a physical single-elevator rig in C, designed from UML diagrams produced before the code.\nArchitecture # The controller is event-driven. A state machine handles the elevator\u0026rsquo;s top-level behaviour across four modes: idle (stationary at a floor, no pending requests), moving (travelling up or down toward the next request), door open (serving a floor, on a 3-second timer before closing), and emergency stop (triggered by the stop button or a sustained obstruction). Transitions between states are driven by events: floor sensors, button presses, the obstruction switch, and the stop button.\nPending floor requests are stored in a doubly-linked list queue. This is far from the most efficient approach by any reasonable metric; the linked list was chosen deliberately to maximize learning. The data in each node of the linked list is a Request struct holding a floor number, a direction, and an off flag indicating whether someone is exiting at that floor:\ntypedef struct Request {\n struct Request *pNextRequest;\n struct Request *pPrevRequest;\n int floor;\n ButtonType direction;\n bool off;\n} Request;\n\ntypedef struct Queue {\n Request *head;\n Request *tail;\n int numberOfNodes;\n} Queue;\nThe queue uses sentinel nodes at both head and tail, which simplify edge-case handling: no insertion or removal ever needs a special case for an empty list or the list boundaries. 
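The sentinel trick is easiest to see in code. Below is a minimal Python sketch of the same idea (illustrative only; the actual project is in C, and only the field names mirror the Request struct):

```python
from dataclasses import dataclass
from typing import Optional

# Sketch of the sentinel-based request queue (the real project is in C;
# field names mirror the Request struct, everything else is illustrative).

@dataclass
class Request:
    floor: int = -1
    direction: str = ""              # stands in for ButtonType
    off: bool = False                # someone exiting at this floor
    prev: Optional["Request"] = None
    next: Optional["Request"] = None

class Queue:
    def __init__(self) -> None:
        # Two sentinel nodes that are never removed: head.next is the first
        # real request, tail.prev the last. An empty queue is head <-> tail.
        self.head = Request()
        self.tail = Request()
        self.head.next = self.tail
        self.tail.prev = self.head
        self.number_of_nodes = 0

    def insert_after(self, node: Request, new: Request) -> None:
        # One code path covers front, back, and empty-queue insertion,
        # because node.next always exists thanks to the sentinels.
        new.prev, new.next = node, node.next
        node.next.prev = new
        node.next = new
        self.number_of_nodes += 1

    def remove(self, node: Request) -> None:
        node.prev.next = node.next
        node.next.prev = node.prev
        self.number_of_nodes -= 1

q = Queue()
q.insert_after(q.head, Request(floor=2, direction="UP"))
q.insert_after(q.head, Request(floor=1, direction="UP"))
print(q.head.next.floor, q.number_of_nodes)  # -> 1 2
```

Note how neither method ever checks for `None`: the sentinels guarantee every real node has live neighbors, which is exactly why the C version never needs special cases.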
New requests are inserted in sorted order using Where_To_Attach_Request, which walks the list from the current position and decides whether to attach before or after each node based on the elevator\u0026rsquo;s current floor, direction of travel, and the requested floor. The result is that requests are serviced in the order the elevator naturally encounters them rather than in arrival order, meaning no unnecessary reversals.\nThe full design was documented in UML before implementation as part of the learning outcome of the course: class diagrams for module interfaces (Queue, Motor, Buttons, SensorData, ButtonHandler, Door), state diagrams for the elevator lifecycle, and sequence diagrams for key scenarios, including door obstruction and power loss mid-travel.\nTTT4275 — Estimation, Detection \u0026amp; Classification (2025) # Linear classifiers and nearest-neighbor methods from scratch in Python\nTwo classification problems: the Iris dataset (3 classes, 4 features) and MNIST handwritten digits (10 classes, 784 features). No library classifiers: gradient descent, KNN, and K-means were all implemented from scratch in NumPy.\nLinear Classifier on Iris # A linear classifier $g(x) = Wx + w_0$, where each of the 3 classes gets its own discriminant function (row of $W$). At inference, the class with the highest output wins. The model is trained with gradient descent on the MSE loss between the sigmoid-activated outputs and one-hot targets:\n$$ \\mathcal{L} = \\frac{1}{N} \\sum_{i=1}^N \\left\\| \\sigma(Wx_i + w_0) - t_i \\right\\|^2 $$The gradient with respect to $W$, with the chain rule through the sigmoid applied elementwise:\n$$ \\nabla_W \\mathcal{L} = \\frac{2}{N} \\sum_{i=1}^N \\left[ \\left( \\sigma(z_i) - t_i \\right) \\odot \\sigma(z_i) \\odot \\left( 1 - \\sigma(z_i) \\right) \\right] x_i^\\top, \\qquad z_i = W x_i + w_0 $$Applying the sigmoid $\\sigma(z) = \\frac{1}{1+e^{-z}}$ before computing the MSE softens the gradient near decision boundaries. Without it, outputs far from the boundary produce large gradients and training diverges early. 
The sigmoid makes the loss landscape more tractable.\nStep length $\\alpha$ had to be tuned carefully. We swept from $10^{-4}$ to $10^{-1}$ and plotted MSE vs iteration count. Below a threshold, the gradient steps were so small that convergence took thousands of iterations with diminishing returns; above it, the loss oscillated and never settled. The usable range was a roughly one order of magnitude window around $10^{-2}$.\nResults: 93–97% accuracy on the test set depending on train/test split. Setosa was classified perfectly in every run — it is linearly separable from the other two in almost any feature pair. Versicolor and Virginica overlap significantly in all four features; the confusion matrices show that almost all misclassifications happen on that boundary. A linear classifier cannot fully separate them regardless of how well it is trained.\nKNN + K-Means on MNIST # 1-NN classification on 28×28 flattened images: for each test image, compute Euclidean distance to all 60,000 training images and assign the label of the nearest neighbor. This achieved 96.9% accuracy on the MNIST test set with no learned representation — raw pixel distance carries a surprising amount of class information for digit images.\nThe inference cost is linear in the training set size: 60,000 distance computations per query. To make this tractable, the training set was compressed using K-means clustering — each class\u0026rsquo;s examples are replaced by $K$ cluster centroids:\n$$ \\mu_k = \\frac{1}{|C_k|} \\sum_{x \\in C_k} x $$Centroids were initialized with K-means++ (choosing each new center with probability proportional to its squared distance from the nearest existing center) rather than random initialization, which improved cluster quality and convergence speed. 
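The K-means++ seeding rule can be sketched in a few lines of NumPy (an illustrative re-implementation, not the course code; `kmeans_pp_init` is a hypothetical name):

```python
import numpy as np

def kmeans_pp_init(X, K, rng=None):
    """K-means++ seeding (illustrative): each new center is drawn with
    probability proportional to its squared distance from the nearest
    already-chosen center."""
    rng = rng or np.random.default_rng(0)
    n = X.shape[0]
    centers = [X[rng.integers(n)]]            # first center: uniform pick
    for _ in range(K - 1):
        # squared distance from every point to its nearest chosen center
        diffs = X[:, None, :] - np.asarray(centers)[None, :, :]
        d2 = (diffs ** 2).sum(axis=-1).min(axis=1)
        centers.append(X[rng.choice(n, p=d2 / d2.sum())])
    return np.asarray(centers)

# Two well-separated point clusters: seeding must pick one center from each,
# because points coinciding with a chosen center get zero selection probability.
X = np.vstack([np.zeros((10, 2)), np.full((10, 2), 5.0)])
C = kmeans_pp_init(X, K=2)
print(sorted(C[:, 0].tolist()))  # -> [0.0, 5.0]
```

The zero-probability property in the toy example is the whole point of the seeding: duplicate or near-duplicate centers are essentially never drawn, which is what improves cluster quality over uniform random picks.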
The final centroids are visually interpretable: at $K = 64$, each digit class has 64 prototype images that together span the variation in writing style for that digit — thick strokes, thin strokes, different slants.\nWith $K = 64$ centroids per class (640 total prototypes vs. 60,000 training images), accuracy dropped to 94.2% — a 2.7 percentage point loss in exchange for roughly a 94× reduction in distance computations per query. The accuracy-vs-$K$ curve had a clear structure: below $K = 32$, centroids were too coarse to represent within-class variation and accuracy dropped sharply; above $K = 128$, additional centroids added marginal improvement at increasing memory cost.\nThe most confused digit pair was 9 and 4, consistent across all $K$ values and classifier variants. Visually, ambiguous 9s have a closed loop with a vertical stroke that closely resembles certain 4 styles — the confusion is in the pixel geometry, not in the classifier.\n","date":"8 April 2026","externalUrl":null,"permalink":"/coursework/elevator/","section":"Coursework","summary":"A C elevator controller (TTK4235), and linear classifiers built from scratch in Python (TTT4275).","title":"Elevators \u0026 Classification","type":"coursework"},{"content":"","date":"8 April 2026","externalUrl":null,"permalink":"/tags/embedded-systems/","section":"Tags","summary":"","title":"Embedded-Systems","type":"tags"},{"content":" Both courses use the same physical rig — two rotors, three encoders, one IMU, bolted to a table. TTK4115 focuses on stabilization and state estimation; TTK4135 on computing and tracking optimal trajectories. The plant is a coupled 3-DOF system with travel $\\lambda$, pitch $p$, and elevation $e$ as states. All designs are based on linearized models around hover.\nTTK4115 — Linear System Theory (2024) # Four labs, each building on the previous.\nPart 1 — Monovariable PD (Pole Placement) # The first lab isolated the travel axis. 
In the linearized model, travel acceleration is proportional to pitch angle:\n$$ \\ddot{\\lambda} = K_{pp} \\, p $$A PD controller was used to drive the travel error to zero, with pitch as the effective input. The gains were computed analytically: specify two desired closed-loop poles, expand the characteristic polynomial, match coefficients. The transfer function from pitch setpoint to travel gives a second-order closed-loop system, so placing both poles determines $K_p$ and $K_d$ uniquely.\nPart 2 — Multivariable LQR # The second lab replaced the single-axis PD with a full-state LQR covering travel, pitch, and elevation simultaneously. The LQR minimizes:\n$$ J = \\int_0^\\infty \\left( x^\\top Q x + u^\\top R u \\right) dt $$The optimal gain matrix $K$ is found by solving the continuous-time algebraic Riccati equation (CARE):\n$$ A^\\top P + PA - PBR^{-1}B^\\top P + Q = 0 \\qquad K = R^{-1}B^\\top P $$The matrices $Q$ and $R$ are design parameters. $Q$ penalizes state error: a large diagonal entry for a state means the controller works hard to keep that state near zero. $R$ penalizes input magnitude, so a large $R$ produces conservative, energy-efficient inputs at the cost of slower tracking.\nWe ran experiments at both extremes. High $Q$, low $R$: tracking was tight and the step responses looked good, but the inputs were saturating frequently and the rig vibrated noticeably. Low $Q$, high $R$: the rig barely responded to reference changes, taking several seconds to reach setpoints. Final tuning balanced the two: setpoints were tracked cleanly within a reasonable time without saturating the actuators. The coupling between axes that was visible in Part 1 was now handled directly by the multivariable structure.\nPart 3 — Luenberger Observer # The rig does not give full state measurements. The IMU provides angular velocity directly; angles (pitch, elevation) come from encoders, which are noisy at low speeds; travel rate is differentiated from encoder counts. 
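For reference, the CARE-based LQR synthesis described above looks like this in Python with SciPy, applied to a hypothetical double-integrator axis rather than the actual rig model:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative LQR synthesis on a double integrator (not the helicopter model).
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.diag([10.0, 1.0])   # heavy penalty on position error, mild on rate
R = np.array([[1.0]])      # moderate input penalty

P = solve_continuous_are(A, B, Q, R)   # solves A'P + PA - PBR^{-1}B'P + Q = 0
K = np.linalg.solve(R, B.T @ P)        # K = R^{-1} B' P

# An LQR gain is always stabilizing for a stabilizable (A, B):
poles = np.linalg.eigvals(A - B @ K)
print(np.all(poles.real < 0))  # -> True
```

Raising the diagonal of `Q` (or lowering `R`) pushes the closed-loop poles further left, reproducing the tight-tracking/saturation trade-off observed on the rig.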
Feeding these raw signals directly into the LQR produces noisy control inputs that cause motor chatter.\nThe solution is a Luenberger observer: essentially a parallel model of the plant, driven by the same inputs as the real system and running in tandem with it, while an output-injection term drives the estimation error toward zero:\n$$ \\dot{\\hat{x}} = A\\hat{x} + Bu + L(y - C\\hat{x}) $$The observer gain $L$ is chosen to place the observer poles faster than the controller poles, so the state estimate converges before the controller acts on it. This is the separation principle: controller and observer can be designed independently and combined without stability loss.\nThe key design decision is which states to include in the observer model. If the observer state vector is smaller than the actual plant state, for example omitting a coupling term, then the innovation term $L(y - C\\hat{x})$ is trying to correct an incomplete model. The estimation error grows unbounded because the missing dynamics can\u0026rsquo;t be compensated by a fixed gain. This showed up clearly in the experimental data: a minimal-state observer had estimation error growing to order $10^3$, while the full-state observer tracked the true states cleanly.\nPart 4 — Kalman Filter # The Luenberger observer uses a fixed gain $L$ designed for nominal conditions. It works well in steady-state but doesn\u0026rsquo;t adapt when noise levels change. The Kalman filter replaces the fixed gain with one that is optimal given the current noise covariances.\nThe filter models process noise $Q_w$ (uncertainty in the plant model) and measurement noise $R_v$ (sensor noise). 
At each timestep it runs two steps:\nPrediction: propagate the state estimate forward using the dynamics model.\nUpdate: correct the prediction using the new measurement, weighted by how much you trust the measurement relative to the model:\n$$ K = P C^\\top (C P C^\\top + R_v)^{-1} $$The Kalman gain $K$ is recomputed (or precomputed offline for a time-invariant system by solving the discrete Riccati equation) so that the correction is small when $R_v$ is large (noisy sensors: lean more on the model) and large when $P$ is large (uncertain model: lean more on the measurement).\nUnder larger disturbances where the model diverged from reality, the Kalman filter consistently outperformed the Luenberger observer: the adaptive weighting absorbed disturbances that the fixed-gain observer amplified into the control signal. The state estimates were smoother, and the resulting control inputs were less noisy.\nTTK4135 — Optimization \u0026amp; Control (2025) # Same rig, different objective. Rather than stabilizing around a fixed setpoint, the goal is to find an optimal input sequence that drives the helicopter from one configuration to another while minimizing a cost, subject to actuator constraints. The four labs increase in complexity from a pure open-loop QP to a nonlinear constrained problem.\nPart 1 — Open-Loop QP # The objective is to find the input sequence $\\mathbf{u} = [u_0, \\ldots, u_{N-1}]^\\top$ that minimizes a quadratic cost over a finite horizon while respecting actuator limits:\n$$ \\min_{\\mathbf{u}} \\; \\frac{1}{2} \\mathbf{u}^\\top G \\mathbf{u} + c^\\top \\mathbf{u} \\qquad \\text{s.t.} \\quad u_{\\min} \\mathbf{1} \\le \\mathbf{u} \\le u_{\\max} \\mathbf{1} $$The matrix $G$ and vector $c$ are assembled analytically by rolling out the linearized dynamics $N$ steps. 
Defining the prediction model as:\n$$ \\mathbf{x} = A_d x_0 + B_d \\mathbf{u} $$and substituting into the quadratic cost in $x$ and $u$, the result is a standard QP in $\\mathbf{u}$ alone. $G$ is positive definite by construction, so the problem is strictly convex and has a unique global minimum. The assembled matrices were passed to MATLAB\u0026rsquo;s quadprog.\nThe cost weight $q$ (on state deviation) was varied across three values: $q = 0.12$, $1.2$, $12$. Low $q$ penalizes state error lightly; the optimizer in turn finds a slow, fuel-efficient trajectory that reaches the target gradually. High $q$ penalizes deviation heavily, and the trajectory reaches the target quickly but demands much larger inputs. The actuator constraints were active (hit the bound) in the high-$q$ case, showing that the optimizer was pushing as hard as physically allowed.\nPart 2 — LQR Feedback on Optimal Trajectory # An open-loop trajectory computed offline is fragile. Any disturbance or model mismatch causes the real system to deviate from the planned path, and the open-loop sequence has no mechanism to correct it, so errors accumulate over the horizon.\nThe fix is to add a feedback layer. The precomputed trajectory gives a nominal state sequence $x^*_k$ and input sequence $u^*_k$. An LQR regulator is designed to reject deviations from this reference by working in deviation coordinates $\\delta x = x - x^*$, $\\delta u = u - u^*$:\n$$ \\delta u_k = -K \\, \\delta x_k \\qquad u_k = u^*_k + \\delta u_k $$The LQR gain $K$ is computed on the same linearized model as the trajectory. The result is a two-layer structure: the trajectory planner sets the nominal path offline, and the LQR stabilizer runs online to absorb disturbances. 
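The two-layer structure can be sketched on a toy system (a sketch with invented numbers, not the lab code or the helicopter model):

```python
import numpy as np

# Sketch of trajectory-plus-LQR feedback on a toy discrete double integrator.
# x_star/u_star stand in for the precomputed optimal trajectory; K for the
# LQR gain. None of these numbers come from the actual helicopter lab.
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
K = np.array([[3.0, 2.5]])              # stabilizing gain, chosen by hand

N = 50
x_star = np.zeros((N + 1, 2))           # nominal trajectory: rest at origin
u_star = np.zeros((N, 1))

x = np.array([1.0, 0.0])                # start displaced from the nominal path
for k in range(N):
    du = -K @ (x - x_star[k])           # feedback on the deviation only
    u = u_star[k] + du                  # total input = nominal + correction
    x = A @ x + B @ u

print(np.linalg.norm(x))                # deviation has decayed close to zero
```

With `du` forced to zero (pure open loop), the initial deviation would persist for the whole horizon; the deviation feedback is what pulls the state back onto the nominal path.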
In practice the rig tracked the reference significantly better with feedback active; without it, the helicopter drifted noticeably within a few seconds of starting the maneuver.\nPart 3 — MPC Formulation # Model Predictive Control generalizes the approach from Part 2 by solving the optimization problem online at every timestep, using the current measured state as the initial condition. The horizon shifts forward by one step each time and only the first control input is applied, then the problem is re-solved with fresh measurements.\n$$ \\min_{u_0, \\ldots, u_{N-1}} \\sum_{k=0}^{N-1} \\left( x_k^\\top Q x_k + u_k^\\top R u_k \\right) + x_N^\\top P x_N $$The terminal cost $x_N^\\top P x_N$ is typically chosen as the LQR cost-to-go (the solution to the infinite-horizon LQR), which gives a stability guarantee for the closed-loop MPC under mild conditions.\nThe key advantage over the open-loop + LQR structure is that MPC re-plans at every step based on the actual current state, so it corrects for disturbances and model error automatically through the optimization rather than through a separate feedback law. Constraints on inputs and states are handled directly inside the QP, so there\u0026rsquo;s no separate saturation logic.\nThe computational cost scales with horizon length $N$: longer horizons give better performance and stability margins but require solving a larger QP. For this lab, the solver ran comfortably within the 10 ms sample interval across all tested horizon lengths.\nPart 4 — Nonlinear SQP with Elevation Constraint # The final lab adds a hard constraint: the helicopter\u0026rsquo;s elevation $e(t)$ must remain above a threshold $e_{\\min}$ for all $t$ in the maneuver. This represents terrain avoidance or a clearance requirement:\n$$ e_k \\ge e_{\\min} \\qquad \\forall \\, k \\in \\{0, \\ldots, N\\} $$Since $e_k$ is a nonlinear function of the inputs through the plant dynamics, this constraint is nonlinear in $\\mathbf{u}$. 
The problem is no longer convex, and quadprog cannot handle it. The course-prescribed solver is MATLAB\u0026rsquo;s fmincon, which solves the nonlinear program using Sequential Quadratic Programming (SQP) internally.\nSQP works by linearizing the constraints at the current iterate and solving the resulting local QP to get a search direction, then taking a step along that direction (with a line search) and repeating. It converges to a local minimum satisfying the KKT conditions. For this problem, with a single inequality constraint on elevation, convergence was consistent and the resulting trajectory arced over the elevation floor rather than cutting through: the optimizer found that the least-cost path respecting the constraint was to climb, complete the travel maneuver at altitude, then descend.\n","date":"8 April 2026","externalUrl":null,"permalink":"/coursework/helicopter/","section":"Coursework","summary":"Two courses, one rig: pole placement and Kalman filtering in TTK4115, then optimal trajectory control with QP, MPC, and SQP in TTK4135.","title":"Helicopter Labs","type":"coursework"},{"content":"","date":"8 April 2026","externalUrl":null,"permalink":"/tags/kalman-filter/","section":"Tags","summary":"","title":"Kalman-Filter","type":"tags"},{"content":"","date":"8 April 2026","externalUrl":null,"permalink":"/tags/lqr/","section":"Tags","summary":"","title":"LQR","type":"tags"},{"content":"","date":"8 April 2026","externalUrl":null,"permalink":"/tags/machine-learning/","section":"Tags","summary":"","title":"Machine-Learning","type":"tags"},{"content":"","date":"8 April 2026","externalUrl":null,"permalink":"/tags/matlab/","section":"Tags","summary":"","title":"MATLAB","type":"tags"},{"content":"","date":"8 April 2026","externalUrl":null,"permalink":"/tags/mpc/","section":"Tags","summary":"","title":"MPC","type":"tags"},{"content":"","date":"8 April 
2026","externalUrl":null,"permalink":"/tags/optimization/","section":"Tags","summary":"","title":"Optimization","type":"tags"},{"content":"","date":"8 April 2026","externalUrl":null,"permalink":"/tags/python/","section":"Tags","summary":"","title":"Python","type":"tags"},{"content":"","date":"8 April 2026","externalUrl":null,"permalink":"/tags/simulink/","section":"Tags","summary":"","title":"Simulink","type":"tags"},{"content":"","date":"8 April 2026","externalUrl":null,"permalink":"/tags/sqp/","section":"Tags","summary":"","title":"SQP","type":"tags"},{"content":"","date":"8 April 2026","externalUrl":null,"permalink":"/tags/","section":"Tags","summary":"","title":"Tags","type":"tags"},{"content":" ██████╗ ██████╗ ██████╗ ██╗███████╗ ██████╗████████╗███████╗ ██╔══██╗██╔══██╗██╔═══██╗ ██║██╔════╝██╔════╝╚══██╔══╝██╔════╝ ██████╔╝██████╔╝██║ ██║ ██║█████╗ ██║ ██║ ███████╗ ██╔═══╝ ██╔══██╗██║ ██║██ ██║██╔══╝ ██║ ██║ ╚════██║ ██║ ██║ ██║╚██████╔╝╚█████╔╝███████╗╚██████╗ ██║ ███████║ ╚═╝ ╚═╝ ╚═╝ ╚═════╝ ╚════╝ ╚══════╝ ╚═════╝ ╚═╝ ╚══════╝ Things I've built, broken, and occasionally fixed. ","date":"5 April 2026","externalUrl":null,"permalink":"/projects/","section":"","summary":"","title":"","type":"projects"},{"content":" ██████╗ ██╗ ██╗ █████╗ ████████╗███████╗██████╗ ███╗ ██╗██╗ ██████╗ ███╗ ██╗ ██████╗ ██████╗ ██╔═══██╗██║ ██║██╔══██╗╚══██╔══╝██╔════╝██╔══██╗████╗ ██║██║██╔═══██╗████╗ ██║ ██╔══██╗██╔══██╗ ██║ ██║██║ ██║███████║ ██║ █████╗ ██████╔╝██╔██╗ ██║██║██║ ██║██╔██╗ ██║ ██║ ██║██████╔╝ ██║▄▄ ██║██║ ██║██╔══██║ ██║ ██╔══╝ ██╔══██╗██║╚██╗██║██║██║ ██║██║╚██╗██║ ██║ ██║██╔═══╝ ╚██████╔╝╚██████╔╝██║ ██║ ██║ ███████╗██║ ██║██║ ╚████║██║╚██████╔╝██║ ╚████║ ██████╔╝██║ ╚══▀▀═╝ ╚═════╝ ╚═╝ ╚═╝ ╚═╝ ╚══════╝╚═╝ ╚═╝╚═╝ ╚═══╝╚═╝ ╚═════╝ ╚═╝ ╚═══╝ ╚═════╝ ╚═╝ Avoids gimbal lock, has superior numerical stability to euler angles, and utilizes error state formulation to avoid 4x3 noninvertible transformation matrix. 
Overview # This is an adaptive backstepping controller used for dynamic positioning (DP) of an AUV in 6DOF. The core idea is to replace the standard Euler angle kinematics with a quaternion error-state formulation, which eliminates gimbal lock and gives a kinematic Jacobian that is invertible everywhere except at a full 180° error — a scenario that, when utilizing SSA (smallest signed angle), simply doesn\u0026rsquo;t occur in practice. The controller was implemented in ROS2 as part of Vortex NTNU\u0026rsquo;s AUV stack.\nConventions # The proof uses the notation below. Note that in the actual implemented code, the names differ from the mathematical derivation:\nMath | Code | Meaning\n$J_e(\\eta)$ | L | Error-state kinematic Jacobian $\\in \\mathbb{R}^{6 \\times 6}$\n$T_e(q_e)$ | Q | Quaternion-to-angular-velocity transformation matrix\nAll other notation follows Fossen (2021). The unit quaternion is written $q = \\begin{bmatrix} \\eta \\\\ \\varepsilon \\end{bmatrix}$ with $\\eta \\in \\mathbb{R}$, $\\varepsilon \\in \\mathbb{R}^3$, and $\\|q\\|^2 = 1$.\nAUV Dynamics # The AUV is governed by two coupled differential equations.\nThe kinematic equation maps body-frame velocities to NED-frame velocities and angular rates: $$\\dot{\\eta} = J_q(\\eta)\\nu \\tag{A}$$ The dynamic equation (Newton-Euler in body frame): $$M\\dot{\\nu} + C(\\nu)\\nu + D(\\nu)\\nu + g(\\eta) + g_0 = \\tau + \\tau_{\\text{dist}} \\tag{B}$$where the full state is $\\eta = \\begin{bmatrix} p^n \\\\ q^n \\end{bmatrix}$ with $p^n = [x_n, y_n, z_n]^\\top$ the NED pose, and $\\nu = \\begin{bmatrix} v_{nb}^b \\\\ \\omega_{nb}^b \\end{bmatrix}$ the body-frame linear and angular velocities.\nThe standard quaternion kinematic matrix is:\n$$J_q(\\eta) = \\begin{bmatrix} R(q_b^n) \u0026 0 \\\\ 0 \u0026 T(q_b^n) \\end{bmatrix}$$where $R(q_b^n) \\in SO(3)$ is the rotation matrix from NED to body, and $T(q_b^n)$ is derived using the quaternion derivative identity:\n$$\\dot{q} = \\frac{1}{2}\\begin{bmatrix} -\\varepsilon^\\top \\\\ \\eta 
I_3 + S(\\varepsilon) \\end{bmatrix} \\omega_{nb}^b = \\frac{1}{2} q_b^n \\otimes \\begin{bmatrix} 0 \\\\ \\omega_{nb}^b \\end{bmatrix}$$The problem: $J_q(\\eta) \\in \\mathbb{R}^{7 \\times 6}$ is non-square and therefore not invertible; the backstepping design used here will later require the inverse of $J$.\nKey assumptions for the DP problem:\nA.1) The AUV is essentially neutrally buoyant, s.t. the terms $g(\\eta) + g_0$ are negligible\nA.2) $M = M^\\top \u003e 0$, $\\dot{M} = 0$ as stated in Fossen 2021, §12.1\nA.3) The damping is parameterised as $D(\\nu)\\nu = -Y(\\nu)\\Theta^\\star$, where $Y(\\nu) \\in \\mathbb{R}^{6 \\times 12}$ is a known velocity-dependent regressor and $\\Theta^\\star \\in \\mathbb{R}^{12}$ are the unknown damping coefficients.\nA.4) The desired pose is, or will eventually become, constant, s.t. $\\dot{\\eta}_d = 0$\nWith these, (B) reduces to (B*) below.\nError-State Quaternion Formulation # Instead of working with $\\eta$ directly, we define the quaternion error:\n$$\\delta q = q_d^* \\otimes q = \\begin{bmatrix} \\eta_e \\\\ \\varepsilon_e \\end{bmatrix}$$where $q^* = q^{-1}$ for unit quaternions, $\\eta_e \\in \\mathbb{R}$ is the scalar part, and $\\varepsilon_e \\in \\mathbb{R}^3$ is the vector part. Note: $\\eta_e$ here is the quaternion scalar and not the pose vector $\\eta$. 
This is to match Fossen\u0026rsquo;s standard overloading of the symbol.\nThe identity quaternion is $q_I = \\begin{bmatrix} 1 \\\\ 0_{3\\times 1} \\end{bmatrix}$, s.t.\n$$ \\begin{align*} \\delta q \u0026\\to q_I \\quad \\text{as} \\quad q \\to q_d \\\\ \\eta_e \u0026\\to 1, \\quad \\varepsilon_e \\to 0 \\end{align*} $$From the angle-axis representation we have $q = \\begin{bmatrix} \\cos(\\theta/2) \\\\ \\hat{n}\\sin(\\theta/2) \\end{bmatrix}$\nHere the vector part of the error quaternion satisfies $\\varepsilon_e = \\hat{n}\\sin(\\theta_e/2)$.\nUsing the small-angle approximation, $\\varepsilon_e \\approx \\tfrac{1}{2}\\hat{n}\\theta_e$, so $2\\varepsilon_e \\approx \\hat{n}\\theta_e$ directly recovers the rotation vector. We therefore take:\n$$e_q = 2\\varepsilon_e \\in \\mathbb{R}^3$$as the orientation error state. This gives a $6$-dimensional error vector $z_1$ and a square Jacobian.\nWe can also see this kinematically: defining $T_e(q_e) = \\eta_e I_3 + S(\\varepsilon_e)$ and differentiating $\\delta q = q_d^* \\otimes q$:\n$$\\dot{\\varepsilon}_e = \\tfrac{1}{2} T_e(q_e)\\,\\omega_{nb}^b \\quad \\Rightarrow \\quad \\dot{e}_q = T_e(q_e)\\,\\omega_{nb}^b$$The factor of 2 in $e_q$ cancels the $\\tfrac{1}{2}$ in $\\dot{\\varepsilon}_e$, which makes $\\dot{e}_q$ a clean linear map from $\\omega_{nb}^b$ that backstepping can invert directly.\nEquations (A) and (B) then become:\n$$ \\begin{align} \\dot{\\eta} \u0026= J_e(\\eta)\\nu \\tag{A*} \\\\ M\\dot{\\nu} + C(\\nu)\\nu - Y(\\nu)\\Theta^\\star \u0026= \\tau + \\tau_d^\\star \\tag{B*} \\end{align} $$with the error-state Jacobian:\n$$J_e(\\eta) = \\begin{bmatrix} R(q_b^n) \u0026 0 \\\\ 0 \u0026 T_e(q_e) \\end{bmatrix} \\in \\mathbb{R}^{6 \\times 6}$$$J_e$ is smooth, differentiable, and invertible for $|\\delta\\theta| \u003c 180°$: exactly the conditions that backstepping requires. 
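The error-state construction above can be sketched numerically (hypothetical helper names, not the ROS2 implementation; scalar-first quaternion convention):

```python
import numpy as np

def quat_mul(q1, q2):
    """Hamilton product, scalar-first convention [eta, eps_x, eps_y, eps_z]."""
    w1, v1 = q1[0], q1[1:]
    w2, v2 = q2[0], q2[1:]
    w = w1 * w2 - v1 @ v2
    v = w1 * v2 + w2 * v1 + np.cross(v1, v2)
    return np.concatenate(([w], v))

def quat_conj(q):
    # For unit quaternions the conjugate is the inverse.
    return np.concatenate(([q[0]], -q[1:]))

def error_state(q, q_d):
    """delta_q = q_d* (x) q, and e_q = 2*eps_e as the orientation error state."""
    dq = quat_mul(quat_conj(q_d), q)
    return dq, 2.0 * dq[1:]

# Current attitude: 90° yaw; desired attitude: identity.
theta = np.pi / 2
q = np.array([np.cos(theta / 2), 0.0, 0.0, np.sin(theta / 2)])
q_d = np.array([1.0, 0.0, 0.0, 0.0])
dq, e_q = error_state(q, q_d)

# At the desired attitude the error collapses to the identity quaternion:
dq_at_target, e_at_target = error_state(q_d, q_d)
print(np.allclose(dq_at_target, [1, 0, 0, 0]), np.allclose(e_at_target, 0))  # -> True True
```

Note that for the 90° case `e_q` recovers the rotation axis scaled by $2\sin(\theta_e/2)$, which only approximates the rotation vector for small errors, as the small-angle argument in the text says.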
Moreover, trading the Euler-angle singularity at 90° pitch for non-invertibility at 180° attitude error is in practice an upgrade: a stable DP controller implemented with SSA should never realistically receive such extreme commands.\nWe chose $\\delta q = q_d^* \\otimes q$ rather than $q \\otimes q_d^*$, because local perturbations expressed in the body frame are more intuitive for control purposes.\nThe Backstepping Design # Step 1, Position \u0026amp; Attitude Error # Define the tracking error in $\\mathbb{R}^6$ using the error-state representation:\n$$z_1 \\doteq \\begin{bmatrix} p - p_d \\\\ e_q \\end{bmatrix} = \\begin{bmatrix} p - p_d \\\\ 2\\varepsilon_e \\end{bmatrix}$$where $\\delta q = q_d^* \\otimes q = \\begin{bmatrix}\\eta_e \\\\ \\varepsilon_e\\end{bmatrix}$, and $z_1 \\to 0$ as $(p, q) \\to (p_d, q_d)$.\nRecall A.4: $\\dot{\\eta}_d = 0$, which is a standard assumption for the DP problem.\nPropose the Lyapunov candidate function:\n$$V_1 = \\frac{1}{2} z_1^\\top z_1 \u003e 0, \\quad V_1(0) = 0$$which is positive definite and radially unbounded. Since $\\dot{z}_1 = J_e(\\eta)\\nu$ from (A*):\n$$\\dot{V}_1 = z_1^\\top \\dot{z}_1 = z_1^\\top J_e(\\eta)\\nu$$Following the backstepping procedure (Khalil §14.3), treat $\\nu$ as a virtual input and split it as $\\nu = \\alpha + z_2$ with $z_2 = \\nu - \\alpha$:\n$$\\dot{V}_1 = z_1^\\top J_e(\\eta)(\\alpha + z_2) = z_1^\\top J_e \\alpha + \\underbrace{z_1^\\top J_e z_2}_{\\text{cross term}}$$Choose the virtual control law:\n$$\\alpha = -J_e(\\eta)^{-1} K_1 z_1, \\quad K_1 = K_1^\\top \u003e 0$$so that $z_1^\\top J_e \\alpha = -z_1^\\top K_1 z_1 \u003c 0$. 
This gives:\n$$\\boxed{\\dot{V}_1 = -z_1^\\top K_1 z_1 + z_1^\\top J_e z_2}$$ Step 2, Velocity Error # A very common LCF for robotic systems is\n$$V_2 = \\frac{1}{2} z_2^\\top M z_2$$where we recall A.2: $M = M^\\top \u003e 0$, $\\dot{M} = 0$ as stated in Fossen 2021, §12.1.\nDifferentiating and substituting (B*) yields\n$$\\dot{V}_2 = z_2^\\top M(\\dot{\\nu} - \\dot{\\alpha}) = z_2^\\top\\bigl(\\tau - C(\\nu)\\nu + Y(\\nu)\\Theta^\\star + \\tau_d^\\star - M\\dot{\\alpha}\\bigr)$$Now consider cross-term cancellation.\nNote that $z_1^\\top J_e z_2$ (from Step 1) is a scalar, so $z_1^\\top J_e z_2 = z_2^\\top J_e^\\top z_1$. If the control law contains $-J_e^\\top z_1$, then the compound LCF $V_c = V_1 + V_2$ yields:\n$$\\dot{V}_c \\ni z_1^\\top J_e z_2 + z_2^\\top(-J_e^\\top z_1) = z_2^\\top J_e^\\top z_1 - z_2^\\top J_e^\\top z_1 = 0$$The cross terms cancel exactly, independent of the structure of $J_e$. If all parameters were known, choosing:\n$$\\tau_{\\text{nom}} = -J_e^\\top z_1 - K_2 z_2 + M\\dot{\\alpha} + C(\\nu)\\nu - Y(\\nu)\\Theta^\\star - \\tau_d^\\star$$yields $\\dot{V}_c = -z_1^\\top K_1 z_1 - z_2^\\top K_2 z_2 \u003c 0$, making $V_c$ a strict Lyapunov function.\nHowever, since $\\Theta^\\star$ and $\\tau_d^\\star$ are unknown, we attempt to cancel them out using an MRAC-style direct estimation procedure.\nAdaptive Extension # Since $\\Theta^\\star$ and $\\tau_d^\\star$ are unknown, introduce two additional LCFs for the parameter errors, where $\\tilde{\\Theta} = \\hat{\\Theta} - \\Theta^\\star$ and $\\tilde{\\tau}_d = \\hat{\\tau}_d - \\tau_d^\\star$:\n$$V_\\Theta = \\frac{1}{2}\\tilde{\\Theta}^\\top \\Gamma_\\theta^{-1} \\tilde{\\Theta}, \\qquad V_d = \\frac{1}{2}\\tilde{\\tau}_d^\\top \\Gamma_d^{-1} \\tilde{\\tau}_d$$where $\\Gamma_\\theta, \\Gamma_d \u003e 0$ are diagonal adaptation gain matrices. 
Under A.3 ($\\Theta^\\star$ and $\\tau_d^\\star$ constant), $\\dot{\\tilde{\\Theta}} = \\dot{\\hat{\\Theta}}$ and $\\dot{\\tilde{\\tau}}_d = \\dot{\\hat{\\tau}}_d$, so:\n$$\\dot{V}_\\Theta = \\tilde{\\Theta}^\\top \\Gamma_\\theta^{-1} \\dot{\\hat{\\Theta}}, \\qquad \\dot{V}_d = \\tilde{\\tau}_d^\\top \\Gamma_d^{-1} \\dot{\\hat{\\tau}}_d$$The compound LCF is then:\n$$V = V_1 + V_2 + V_\\Theta + V_d$$Substituting the control law (to be derived):\n$$\\tau = -J_e^\\top z_1 - K_2 z_2 + M\\dot{\\alpha} + C(\\nu)\\nu - Y(\\nu)\\hat{\\Theta} - \\hat{\\tau}_d$$into $\\dot{V} = \\dot{V}_1 + \\dot{V}_2 + \\dot{V}_\\Theta + \\dot{V}_d$ and collecting all terms (cross terms cancel, $C(\\nu)\\nu$ and $M\\dot\\alpha$ cancel):\n$$ \\begin{aligned} \\dot{V} \u0026= -z_1^\\top K_1 z_1 - z_2^\\top K_2 z_2 \\\\ \u0026\\quad + \\tilde{\\Theta}^\\top\\!\\left(\\Gamma_\\theta^{-1}\\dot{\\hat{\\Theta}} - Y(\\nu)^\\top z_2\\right) \\\\ \u0026\\quad + \\tilde{\\tau}_d^\\top\\!\\left(\\Gamma_d^{-1}\\dot{\\hat{\\tau}}_d - z_2\\right) \\end{aligned} $$Zeroing the two bracketed terms gives the update laws:\n$$\\dot{\\hat{\\Theta}} = \\Gamma_\\theta\\, Y(\\nu)^\\top z_2, \\qquad \\dot{\\hat{\\tau}}_d = \\Gamma_d\\, z_2$$ Final Control Law # $$ \\boxed{\\tau = -J_e^\\top z_1 - K_2 z_2 + M\\dot{\\alpha} + C(\\nu)\\nu - Y(\\nu)\\hat{\\Theta} - \\hat{\\tau}_d} $$which gives:\n$$\\dot{V} = -z_1^\\top K_1 z_1 - z_2^\\top K_2 z_2 \u003c 0, \\quad \\forall\\,(z_1, z_2) \\neq 0 \\quad \\blacksquare$$Global asymptotic stability of $z_1 = 0$, $z_2 = 0$ follows from LaSalle\u0026rsquo;s invariance principle (Khalil §4.2) applied to the compact sub-level sets of $V$, with parameter estimates remaining bounded due to the adaptive law structure.\nComputing $\\dot{\\alpha}$ # The tricky part of any backstepping design, which many authors refer to as the explosion of complexity, is calculating the derivative of the virtual control law.\n$$\\alpha = -J_e(\\eta)^{-1} K_1 z_1$$To get the derivative 
we can apply the matrix derivative identity $\\tfrac{d}{dt}(A^{-1}) = -A^{-1}\\dot{A}A^{-1}$.\n$$\\dot{\\alpha} = J_e^{-1}\\dot{J}_e J_e^{-1} K_1 z_1 - J_e^{-1} K_1 J_e \\nu$$The second term is immediate. The first requires $\\dot{J}_e$, which involves:\n$$\\dot{R}(q^n) = R(q^n) S(\\omega_{nb}^b), \\qquad \\dot{T}_e = \\dot{\\eta}_e I_3 + S(\\dot{\\varepsilon}_e)$$where $\\dot{\\eta}_e$ and $\\dot{\\varepsilon}_e$ come from differentiating $\\delta q = q_d^* \\otimes q$:\n$$\\dot{q}_e = \\begin{bmatrix} \\dot{\\eta}_e \\\\ \\dot{\\varepsilon}_e \\end{bmatrix} = \\frac{1}{2} q_e \\otimes \\begin{bmatrix} 0 \\\\ \\omega \\end{bmatrix} = \\frac{1}{2}\\begin{bmatrix} -\\varepsilon_e^\\top \\omega \\\\ (\\eta_e I_3 + S(\\varepsilon_e))\\omega \\end{bmatrix}$$Substituting back gives the compact block expression for $\\dot{J}_e$:\n$$\\dot{J}_e = \\begin{bmatrix} R\\,S(\\omega) \u0026 0 \\\\ 0 \u0026 \\tfrac{1}{2}\\bigl(S(T_e \\omega) - (\\varepsilon_e^\\top \\omega) I_3\\bigr) \\end{bmatrix}$$Unlike the Euler-angle $\\dot{T}$ (a lengthy trigonometric expression), the quaternion $\\dot{T}_e$ has a compact closed form because the quaternion kinematic equation is bilinear. The $T_e$ parameterisation was designed to make this tractable.\nNo Gimbal Lock at 90° Pitch # Euler-angle-based controllers lose a degree of freedom at ±90° pitch (gimbal lock). The quaternion error-state formulation has no such singularity and $J_e$ remains full rank for all $|\\delta\\theta| \u003c 180°$.\nBelow: the AUV commanded to hold 90° pitch in simulation, and the corresponding tracking data.\nLinks # Source code (ROS2) Vortex NTNU Sources # T. I. Fossen, Handbook of Marine Craft Hydrodynamics and Motion Control, 2nd ed. Hoboken, NJ: Wiley, 2021. ISBN 978-1-119-57510-7.\nUnit-quaternion kinematics and error quaternion (Ch. 2) Marine craft dynamics and the mass/Coriolis decomposition (Ch. 3) Backstepping control for DP (Ch. 12). H. K. Khalil, Nonlinear Systems, 3rd ed. 
Upper Saddle River, NJ: Prentice Hall, 2002. ISBN 978-0-13-067389-3.\nLyapunov stability analysis of nonlinear systems (Ch. 4) Barbalat\u0026rsquo;s lemma for adaptive systems (§8.4). Backstepping design for strict-feedback systems (§14.3) ","date":"5 April 2026","externalUrl":null,"permalink":"/projects/quat-dp-controller/","section":"","summary":"Adaptive backstepping controller for dynamic positioning of an AUV, using a quaternion error-state formulation to avoid gimbal lock.","title":"Adaptive Backstepping DP Controller using Quaternions for attitude","type":"projects"},{"content":"","date":"5 April 2026","externalUrl":null,"permalink":"/tags/adaptive-control/","section":"Tags","summary":"","title":"Adaptive-Control","type":"tags"},{"content":"","date":"5 April 2026","externalUrl":null,"permalink":"/tags/backstepping/","section":"Tags","summary":"","title":"Backstepping","type":"tags"},{"content":"","date":"5 April 2026","externalUrl":null,"permalink":"/tags/quaternions/","section":"Tags","summary":"","title":"Quaternions","type":"tags"},{"content":"","date":"5 April 2026","externalUrl":null,"permalink":"/tags/robotics/","section":"Tags","summary":"","title":"Robotics","type":"tags"},{"content":"","date":"5 April 2026","externalUrl":null,"permalink":"/tags/ros2/","section":"Tags","summary":"","title":"ROS2","type":"tags"},{"content":"","date":"21 March 2026","externalUrl":null,"permalink":"/tags/qp/","section":"Tags","summary":"","title":"QP","type":"tags"},{"content":" Overview # The Thrust Allocator distributes generalized control forces $\\tau \\in \\mathbb{R}^n$ to the AUV\u0026rsquo;s actuators as control inputs $u \\in \\mathbb{R}^r$. For linear systems this reduces to $\\tau = Bu$, where $B$ is the input matrix. 
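The unweighted pseudoinverse allocation derived in the Solvers section can be sketched in a few lines of NumPy. The configuration matrix below is hypothetical (4 thrusters, 3 DOF), purely for illustration; the actual stack works with 8 thrusters in 6 DOF:

```python
import numpy as np

# Hypothetical example: 4 thrusters acting in 3 DOF (surge, sway, yaw).
# This is NOT the real thruster geometry; it only illustrates the mapping.
T_e = np.array([
    [1.0,  1.0, 0.0,  0.0],   # surge contribution of each thruster
    [0.0,  0.0, 1.0,  1.0],   # sway contribution
    [0.5, -0.5, 0.3, -0.3],   # yaw-moment contribution
])

def allocate(tau: np.ndarray) -> np.ndarray:
    """Right Moore-Penrose pseudoinverse allocation: f = T_e^+ tau."""
    T_pinv = T_e.T @ np.linalg.inv(T_e @ T_e.T)  # T^+ = T^T (T T^T)^-1
    return T_pinv @ tau

tau = np.array([10.0, 0.0, 1.0])  # desired generalized force/moment
f = allocate(tau)
# Since T_e here has full row rank, the allocated forces reproduce the
# commanded tau exactly (up to floating-point error): T_e @ f == tau.
```

For a rank-deficient or weighted configuration, `np.linalg.pinv` (or the weighted generalized inverse with $W_f$) would be the safer choice than the explicit inverse.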
The individual control inputs $u_i$ are passed downstream to the thruster interface.\nThis module is part of Vortex NTNU\u0026rsquo;s autonomous underwater vehicle stack.\nSolvers # Pseudoinverse Allocator # Follows the unconstrained weighted least-squares formulation from Fossen (2021, Eq. 11.27):\n$$ J = \\min_{f_e} \\; f_e^\\top W_f f_e \\qquad \\text{s.t.} \\qquad \\tau - T_e f_e = 0 $$Solving yields the generalized pseudoinverse (Fossen Eq. 11.35):\n$$ T_w^+ = W_f^{-1} T_e^\\top \\left( T_e W_f^{-1} T_e^\\top \\right)^{-1} $$Since the Orca drone uses 8 identical thrusters, there is no practical reason to weight actuators differently. With $W_f = I$, the solution simplifies to the right Moore–Penrose pseudoinverse (Fossen Eq. 11.36):\n$$ T^+ = T_e^\\top (T_e T_e^\\top)^{-1} $$Our new drone, however, will use 8 thrusters with different maximum and minimum thrust in the forward and backward directions, so retaining optimality there will require a more advanced method. Constrained QP Allocator # Formulated as a quadratic program following Fossen (2021, Eq. 11.38). The original formulation includes a load-balancing parameter $\\bar{f}$, but for our use case different maneuvers require some thrusters to work hard while others rest — so load balancing does more harm than good.\nThe implemented formulation drops the load-balancing term:\n$$ \\begin{aligned} J \u0026= \\min_{f_e,\\, s} \\; \\left( f_e^\\top W_f f_e + s^\\top Q s \\right) \\\\ \u0026\\text{s.t.} \\quad T_e f_e = \\tau + s, \\qquad f_{\\min} \\le f_e \\le f_{\\max} \\end{aligned} $$This retains thrust limits and soft constraint handling via the slack vector $s$ while avoiding unnecessary force redistribution.\nNotation # All formulations follow the notation of Fossen (2021), Ch.
11:\n$\\tau \\in \\mathbb{R}^n$ — desired generalized force $T_e$ — extended thruster configuration matrix $f_e$ — extended force vector $W_f \\succeq 0$ — weighting matrix on the extended force vector $Q \\succeq 0$ — weighting matrix on the slack vector $s$ $f_{\\min}, f_{\\max}$ — lower and upper bounds on $f_e$ Acknowledgements # The thrust allocation formulations are based on:\nThor I. Fossen, Handbook of Marine Craft Hydrodynamics and Motion Control, 2nd Edition, Wiley, 2021 — Chapter 11.\nLinks # Vortex NTNU Source code Thruster Interface ","date":"21 March 2026","externalUrl":null,"permalink":"/projects/thruster-allocator/","section":"","summary":"Constrained and pseudoinverse thrust allocation for Vortex NTNU’s AUV, based on Fossen’s formulations.","title":"Thrust Allocator for a 6DOF AUV","type":"projects"},{"content":"","date":"1 January 2026","externalUrl":null,"permalink":"/tags/arch/","section":"Tags","summary":"","title":"Arch","type":"tags"},{"content":"","date":"1 January 2026","externalUrl":null,"permalink":"/tags/dotfiles/","section":"Tags","summary":"","title":"Dotfiles","type":"tags"},{"content":"","date":"1 January 2026","externalUrl":null,"permalink":"/tags/linux/","section":"Tags","summary":"","title":"Linux","type":"tags"},{"content":"","date":"1 January 2026","externalUrl":null,"permalink":"/tags/lua/","section":"Tags","summary":"","title":"Lua","type":"tags"},{"content":"","date":"1 January 2026","externalUrl":null,"permalink":"/tags/neovim/","section":"Tags","summary":"","title":"Neovim","type":"tags"},{"content":" ███╗ ██╗███████╗ ██████╗ ██╗ ██╗██╗███╗ ███╗ ████╗ ██║██╔════╝██╔═══██╗██║ ██║██║████╗ ████║ ██╔██╗ ██║█████╗ ██║ ██║██║ ██║██║██╔████╔██║ ██║╚██╗██║██╔══╝ ██║ ██║╚██╗ ██╔╝██║██║╚██╔╝██║ ██║ ╚████║███████╗╚██████╔╝ ╚████╔╝ ██║██║ ╚═╝ ██║ ╚═╝ ╚═══╝╚══════╝ ╚═════╝ ╚═══╝ ╚═╝╚═╝ ╚═╝ My editor is not an IDE. It's better. The Setup # A custom Neovim configuration built on the AstroVim template, managed with lazy.nvim. 
The config lives in a lua/plugins/ folder structure and is designed around a productive development workflow with first-class Git integration via diffview.nvim.\nCheatsheet # Movement # Key Action w Next word start e End of word b Previous word 0 Start of line ^ First non-blank character $ End of line \u0026lt;C-d\u0026gt; / \u0026lt;C-u\u0026gt; Half page down / up \u0026lt;C-f\u0026gt; / \u0026lt;C-b\u0026gt; Full page down / up zz Center cursor gg / G Top / bottom of file Insert Mode Entry # Key Action i / a Insert before / after cursor I / A Insert at start / end of line o / O New line below / above + insert Navigation Between Files # Key Action \u0026lt;leader\u0026gt;ff Find files \u0026lt;leader\u0026gt;fF Find all files (incl hidden/ignored) \u0026lt;leader\u0026gt;fg Find git-tracked files \u0026lt;leader\u0026gt;fw Find words across project \u0026lt;leader\u0026gt;fb Find open buffers \u0026lt;leader\u0026gt;fo Recent files \u0026lt;leader\u0026gt;bb Choose buffer from timeline \u0026lt;C-o\u0026gt; / \u0026lt;C-i\u0026gt; Jump back / forward LSP # Key Action K Hover docs gd Go to definition gD Go to declaration gy Go to type definition gr List references Editing Essentials # Key Action u / \u0026lt;C-r\u0026gt; Undo / redo v / V Character / line selection gv Reselect previous selection y / yy Yank / yank whole line d / x Cut / delete char under cursor p / P Paste after / before cursor \u0026gt; / \u0026lt; Indent / unindent . 
Repeat last change ciw Change inner word diw Delete inner word ci\u0026quot; Change inside quotes ci( Change inside parentheses ci{ Change inside braces File Tree # Key Action \u0026lt;leader\u0026gt;e Toggle file tree Enter Open file a / d / r Add / delete / rename m / y / p Move / copy / paste q Close tree \u0026lt;Tab\u0026gt; / \u0026lt;S-Tab\u0026gt; Next / previous source Surround (mini.surround) # Key Action sa Add surrounding sd Delete surrounding sr Replace surrounding sf / sF Find surrounding right / left sh Highlight surrounding sn Update n_lines Search \u0026amp; Replace # Key Action :%s/a/b/g Replace all in file :%s/a/b/gc Replace with confirmation Merge Conflicts (diffview.nvim) # Key Action \u0026lt;leader\u0026gt;mo Choose OURS \u0026lt;leader\u0026gt;mt Choose THEIRS ]x / [x Next / prev conflict (within file) \u0026lt;Tab\u0026gt; / \u0026lt;S-Tab\u0026gt; Cycle conflicted files Workflow: merge → :DiffviewOpen → navigate → resolve → save → :DiffviewClose → git add . \u0026amp;\u0026amp; git merge --continue Links # AstroNeovim Source code ","date":"1 January 2026","externalUrl":null,"permalink":"/projects/nvim-config/","section":"","summary":"Neovim config based on the AstroVim distro, with some modifications and some vibe-coding functionality I use in day-to-day coding","title":"Neovim config for personal use","type":"projects"},{"content":"","date":"1 January 2026","externalUrl":null,"permalink":"/tags/ubuntu/","section":"Tags","summary":"","title":"Ubuntu","type":"tags"},{"content":" █████╗ ██████╗ ██████╗ ██╗ ██╗████████╗ ██╔══██╗██╔══██╗██╔═══██╗██║ ██║╚══██╔══╝ ███████║██████╔╝██║ ██║██║ ██║ ██║ ██╔══██║██╔══██╗██║ ██║██║ ██║ ██║ ██║ ██║██████╔╝╚██████╔╝╚██████╔╝ ██║ ╚═╝ ╚═╝╚═════╝ ╚═════╝ ╚═════╝ ╚═╝ rm -rf doubts \u0026\u0026 make future Q3rkses # Control Engineering / Cybernetics student at NTNU Trondheim.\nI speak English, Norwegian, and Polish natively, and I\u0026rsquo;m currently studying Japanese. よろしくおねがいします。\nWhen I\u0026rsquo;m not
studying control theory stuff or lifting weights, I\u0026rsquo;m likely configuring Neovim or fighting my Arch install after an (accidental) sudo pacman -Syu.\nTech I Work With # Languages — C, C++, Python, Lua, MATLAB Robotics — ROS2, CasADi, OpenCV Editor — Neovim (AstroVim + lazy.nvim) OS — Arch Linux (when it\u0026rsquo;s alive), Ubuntu (when it\u0026rsquo;s not). Yes, I dislike the current state of Windows. Contact # Feel free to reach out via GitHub or LinkedIn — links are on the homepage.\n","externalUrl":null,"permalink":"/about/","section":"","summary":"","title":"","type":"page"},{"content":" ███████╗███████╗ █████╗ ██████╗ ██████╗██╗ ██╗ ██╔════╝██╔════╝██╔══██╗██╔══██╗██╔════╝██║ ██║ ███████╗█████╗ ███████║██████╔╝██║ ███████║ ╚════██║██╔══╝ ██╔══██║██╔══██╗██║ ██╔══██║ ███████║███████╗██║ ██║██║ ██║╚██████╗██║ ██║ ╚══════╝╚══════╝╚═╝ ╚═╝╚═╝ ╚═╝ ╚═════╝╚═╝ ╚═╝ ","externalUrl":null,"permalink":"/search/","section":"","summary":"Search across all posts and pages","title":"","type":"page"},{"content":" ████████╗██╗ ██╗███████╗███████╗██╗███████╗ ╚══██╔══╝██║ ██║██╔════╝██╔════╝██║██╔════╝ ██║ ███████║█████╗ ███████╗██║███████╗ ██║ ██╔══██║██╔══╝ ╚════██║██║╚════██║ ██║ ██║ ██║███████╗███████║██║███████║ ╚═╝ ╚═╝ ╚═╝╚══════╝╚══════╝╚═╝╚══════╝ Work in progress — just like the thesis itself. Master\u0026rsquo;s Thesis # Programme: Control Engineering \u0026amp; Cybernetics — Integrated Master\u0026rsquo;s at ITK NTNU Trondheim\nThis page is where I will (attempt to) document the work on my master\u0026rsquo;s thesis.\n","externalUrl":null,"permalink":"/thesis/","section":"","summary":"","title":"","type":"page"},{"content":"","externalUrl":null,"permalink":"/categories/","section":"Categories","summary":"","title":"Categories","type":"categories"},{"content":"","externalUrl":null,"permalink":"/series/","section":"Series","summary":"","title":"Series","type":"series"}]