Posts

CST 334 Week 8

Looking back on CST334, I can confidently say that this course pushed me in ways I didn’t expect, especially with the breadth and depth of material covered. One of my biggest takeaways is just how much is happening “under the hood” of computers that we often take for granted. Topics like inode addressing, disk access delays, memory hierarchy, and virtual address translation gave me a much clearer picture of how operating systems manage resources efficiently. Working through problems where I had to calculate inode locations, determine transfer delays based on seek and rotational latency, or translate virtual to physical addresses really showed me the importance of precision and attention to detail. These aren’t just abstract concepts; they are the backbone of how files are stored, accessed, and secured. I also found scheduling algorithms like Shortest Job First (SJF) fascinating because they highlighted how small design choices in the OS can dramatically impact performance and fairness. At th...
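
To keep the inode arithmetic straight, I found it easiest to write the calculation out as code. Here's a minimal sketch in C, assuming a classic VSFS-style layout with made-up parameters (256-byte inodes packed into 4 KB blocks, inode table starting at block 3):

    #include <stdio.h>

    /* Assumed VSFS-style layout parameters (not from any specific system). */
    #define INODE_SIZE  256     /* bytes per on-disk inode */
    #define BLOCK_SIZE  4096    /* bytes per disk block */
    #define TABLE_START 3       /* first block of the inode table */

    int main(void) {
        int inum = 32;                            /* example inode number */
        int byte_offset = inum * INODE_SIZE;      /* offset into the inode table */
        int block  = TABLE_START + byte_offset / BLOCK_SIZE;
        int offset = byte_offset % BLOCK_SIZE;
        /* inode 32 -> byte 8192 -> block 5, offset 0 */
        printf("inode %d is in block %d at offset %d\n", inum, block, offset);
        return 0;
    }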

CST334 Week 7

This week, our focus shifted significantly from the CPU and memory to the world of I/O devices and data persistence. We began with the general architecture of I/O devices, and I found it particularly interesting to trace the evolution of communication from inefficient programmed I/O, where the CPU is tied up just waiting, to more advanced techniques like interrupts and DMA that allow for true multitasking. From there, we took a deep dive into the classic hard disk drive. Understanding its physical geometry, with platters, tracks, and sectors, was key for me to grasp why its performance characteristics are so different from RAM. Breaking down disk access time into seek time, rotational delay, and transfer time made it clear that the physical movement of the disk arm is the biggest performance bottleneck, which directly explains why intelligent disk scheduling is so important. Building on that hardware foundation, the rest of the week was dedicated to the file system abstraction. We star...
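
To make the seek/rotate/transfer breakdown concrete, here's a small C sketch with assumed drive numbers (4 ms average seek, 7200 RPM, 100 MB/s transfer rate, one 4 KB request) rather than anything from a real datasheet:

    #include <stdio.h>

    int main(void) {
        /* Assumed drive characteristics, for illustration only. */
        double seek_ms      = 4.0;                  /* average seek time */
        double rotation_ms  = 60.0 / 7200.0 * 1000; /* ~8.33 ms per revolution */
        double rot_delay_ms = rotation_ms / 2.0;    /* average: half a revolution */
        double transfer_ms  = 4.0 / (100.0 * 1024)  /* 4 KB at 100 MB/s ... */
                              * 1000.0;             /* ... is ~0.04 ms */

        double total_ms = seek_ms + rot_delay_ms + transfer_ms;
        printf("%.2f + %.2f + %.3f = %.2f ms per random 4 KB read\n",
               seek_ms, rot_delay_ms, transfer_ms, total_ms);
        return 0;
    }

The numbers make the bottleneck obvious: the mechanical parts (seek plus rotation) cost milliseconds while the actual transfer costs microseconds, which is exactly why disk scheduling tries to minimize arm movement.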

CST 334 Week 5

This week was particularly challenging, not just because of midterms, but also due to the introduction of a fundamentally new and complex topic: concurrency. We began with the core idea of a thread as a single execution sequence within a process. Moving from the single-threaded model we've implicitly used so far to a multi-threaded one is a huge paradigm shift. The ability to have multiple threads running seemingly in parallel opens up incredible possibilities for performance and responsiveness, but it also introduces a whole new class of problems. Working with the thread API, specifically using pthread_create to spawn new threads and pthread_join to wait for them, provided a concrete foundation. It was one thing to hear about threads conceptually, but another entirely to write a program where different parts of the code execute independently. It immediately became clear why this is so essential for everything from responsive user interfaces to high-performance servers. The second ...
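
Here's a minimal sketch of that pattern (compile with gcc -pthread); the worker function and its messages are just placeholders:

    #include <stdio.h>
    #include <pthread.h>

    /* Each thread runs this function independently. */
    void *worker(void *arg) {
        printf("hello from thread %s\n", (char *)arg);
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, "A");  /* spawn two threads */
        pthread_create(&t2, NULL, worker, "B");
        pthread_join(t1, NULL);                  /* wait for both to finish */
        pthread_join(t2, NULL);
        printf("main: both threads done\n");
        return 0;
    }

Even in a toy program like this, the order of the two hello lines isn't guaranteed, which hints at the whole new class of problems concurrency introduces.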

CST 334 Week 4

This week in CST 334 felt like a deep dive into the core mechanics of how an operating system manages memory. The readings, from paging and TLBs to swapping, all built upon one another and helped me understand virtual memory. Starting with OSTEP 18, the concept of paging immediately clicked as a much more elegant solution than the segmentation methods we looked at previously. Breaking both the virtual address space and physical memory into fixed-size pages seems like such a straightforward way to avoid external fragmentation and simplify allocation. However, the performance cost of requiring an extra memory access for every address translation was an obvious drawback, which made the introduction of the Translation Lookaside Buffer (TLB) easier to understand. Understanding the TLB as a specialized hardware cache that speeds up these translations was a key insight. Then, OSTEP 20 on multi-level paging addressed the next big question I had: how does the OS handle the massive page tabl...
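
To make the translation mechanics concrete, here's a tiny sketch assuming a 32-bit virtual address, 4 KB pages (so a 12-bit offset), and a made-up VPN-to-PFN mapping:

    #include <stdio.h>
    #include <stdint.h>

    #define OFFSET_BITS 12                       /* 4 KB pages -> 12-bit offset */
    #define OFFSET_MASK ((1u << OFFSET_BITS) - 1)

    int main(void) {
        uint32_t vaddr  = 0x0041F123;            /* example virtual address */
        uint32_t vpn    = vaddr >> OFFSET_BITS;  /* index into the page table */
        uint32_t offset = vaddr & OFFSET_MASK;   /* unchanged by translation */

        /* Pretend the page table (or a TLB hit) maps this VPN to PFN 0x77. */
        uint32_t pfn   = 0x77;
        uint32_t paddr = (pfn << OFFSET_BITS) | offset;
        printf("VPN 0x%x, offset 0x%x -> physical 0x%x\n", vpn, offset, paddr);
        return 0;
    }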

CST 334 Week 3

This week in CST 334 we dove into the foundational concepts of memory virtualization. We started with the idea of the address space, which is the abstraction the operating system provides to a running program. It was interesting to learn that every process gets its own private virtual memory, starting from address zero, completely isolated from other processes. This illusion of a large, contiguous memory space simplifies things immensely for programmers. We then connected this abstraction to the practical tools we use in C, like malloc() and free(). It was eye-opening to see the direct link between a high-level concept like an address space and the low-level memory allocation calls that can lead to so many common bugs, such as memory leaks or buffer overflows. Understanding the underlying system makes it much clearer why managing memory carefully is so critical. The OS is doing a ton of work behind the scenes to maintain this illusion, and the C memory API is our direct interface to t...
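
As a small illustration of that interface, here's a sketch of the malloc()/free() lifecycle; the buffer size and contents are arbitrary:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        char *buf = malloc(16);       /* request heap space in our address space */
        if (buf == NULL)
            return 1;                 /* malloc can fail; always check */
        strcpy(buf, "hello");         /* 6 bytes including '\0' fit in 16 */
        printf("%s at %p\n", buf, (void *)buf);
        free(buf);                    /* omitting this line is a memory leak */
        buf = NULL;                   /* guard against dangling-pointer reuse */
        return 0;
    }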

CST 334 Week 2

This week in CST 334 provided a comprehensive exploration of how an operating system manages processes and CPU scheduling. The initial lectures established the fundamental concept of a process as an active instance of a program in execution, complete with its own address space and state. The study of the C Process API, particularly fork() and exec(), offered a practical application of these concepts, demonstrating how new processes are created and managed programmatically. A key concept that unified these ideas was Limited Direct Execution, a mechanism where the OS grants a process direct access to the CPU for efficiency but retains ultimate control through system calls and traps. This model directly clarifies the nature of context switches: a voluntary switch occurs when a process makes a system call for a resource (like I/O), while an involuntary switch happens when the OS preemptively reclaims control, often due to a timer interrupt. A significant portion of the week was dedicated...
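
Here's a minimal sketch of the fork()/exec()/wait() pattern we studied; running ls -l in the child is an arbitrary choice:

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void) {
        pid_t pid = fork();                   /* duplicate the calling process */
        if (pid < 0) {
            perror("fork");
            exit(1);
        } else if (pid == 0) {
            /* Child: replace its memory image with a new program. */
            char *args[] = {"ls", "-l", NULL};
            execvp(args[0], args);
            perror("execvp");                 /* reached only if exec fails */
            exit(1);
        } else {
            wait(NULL);                       /* parent blocks until child exits */
            printf("parent: child %d finished\n", (int)pid);
        }
        return 0;
    }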

CST 334 Week 1

I learned Docker by installing Docker Desktop and verifying the setup with docker run hello-world. I then dove a little deeper into what containers are, how mounting works, and what I can do inside a Docker container, which taught me about image layering and caching. In C#, I realized high-level features hide a lot of complexity: strings are immutable and managed by the runtime, and memory allocation “just works.” To see what really happens under the hood, we did an assignment based on a C string library where I needed to juggle heap versus stack. The string struct itself lives on the stack (or wherever I declare it), but its data pointer refers to a heap buffer that I allocate with malloc or grow with realloc. Managing that buffer carefully avoids corruption and memory leaks. A new function I also learned on the fly was memmove, which lets me shift bytes directly inside my buffer even when the source and destination overlap, for example in the insert and erase functions that I wrote for the string library a...
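
Here's a stripped-down sketch of the overlapping-shift idea behind insert. I use a fixed stack buffer and made-up indices for brevity; the same memmove call works on a heap buffer grown with realloc:

    #include <stdio.h>
    #include <string.h>

    int main(void) {
        char buf[32] = "helloworld";
        size_t len = strlen(buf);     /* 10 */

        /* Insert ", " at index 5: shift the tail right first.
         * memmove is safe even though source and destination overlap. */
        size_t at = 5, n = 2;
        memmove(buf + at + n, buf + at, len - at + 1);  /* +1 moves '\0' too */
        memcpy(buf + at, ", ", n);
        printf("%s\n", buf);          /* hello, world */
        return 0;
    }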