
Showing posts from April, 2025

CST 334 Week 8

Final week of the course! I wouldn't say there was too much to learn this week, as it was mostly working on the group project and studying for the final exam. However, I will say that researching and reading for the group project was very insightful. I read about the input and output coverage needed in file systems. It was a good read that taught us not only what was used to test input and output, but also why it is the more effective testing approach. The article focuses on file system testing, particularly why relying only on code coverage (just checking whether the code runs) isn't enough to find real bugs. In their real-world bug study of Ext4 and Btrfs, the authors manually examined 200 recent bug-fix commits and discovered that xfstests covered the relevant code for 53% (37 out of 70) of those bugs but still failed to detect them; even more striking, 81% (57 of 70) of all bugs were triggered only by specific syscall inputs or outputs. As the cours...

CST 334 Week 7

This week, week 7 in CST 334, we learned about file systems, I/O devices, and persistent storage management. We learned that files serve as a key abstraction provided by the operating system: they are linear arrays of bytes stored persistently on disk. Files are organized into directories, which form a hierarchical structure that maps human-readable file names to lower-level identifiers like inode numbers. To implement a file system you need to manage various on-disk structures, like data blocks, inodes, allocation bitmaps, and a superblock that holds high-level information about the file system. Inodes also store metadata like file size, ownership, and block pointers. We then saw file system layouts like the System V file system, which has a centralized inode table, and the Berkeley Fast File System (FFS), which improves performance by grouping related data and metadata into cylinder groups. To support large files, inodes use techniques like multi-level indexing through indirect pointers. ...

CST 338 Week 7/Week 8 Learning Journal

HW 1: Looking back at HW 1, Markov Text Generation, I do not think I would approach it any differently. I would still go into it solving the smaller methods first, then proceeding method call by method call. I think my actual coding and efficiency would have gotten a lot better as I learned more throughout the year, making for better code, but I would have approached the assignment the same way.  Two Victories: I know it was a while ago, but the CodingBat practice was something I was really proud of. Not only was it a good review of some of the extreme basics, but I genuinely learned coding I had not before. I remember going through the strings segment, actually looking at the class and its functionality, and being really intrigued by what it had to offer. The same goes for HashMap; that was new to me and I enjoyed learning about it.  Another victory was confidence in polymorphism, abstraction, extending, and interfaces. When I first learned about them I always got them confuse...

CST 334 Week 6 Journal

This week in CST 334 we learned about semaphores and the problems that arise in concurrency. To solve a broad range of concurrency problems you need both locks and condition variables. There is a synchronization primitive called a semaphore that can act as either a lock or a condition variable: it is an object with an integer value that we can manipulate. We looked at synchronization problems like the producer/consumer problem, where semaphores coordinate threads to prevent buffer over- and underflows. To fix this problem we have to handle both mutual exclusion and condition signaling. Another problem is the reader-writer problem, which shows the need to balance concurrent access by allowing multiple readers but NOT multiple writers, as we saw in PA5. We also learned about deadlock, which occurs due to complex locking protocols. For a deadlock to occur, four conditions HAVE to hold. Mutual exclusion: threads claim exclusive control of the resources that they require, like when a thread grabs a lock. Hold and...

CST 334 Week 5 Learning Journal

This week in CST 334 we learned about concurrency and threads, the thread API, locks and locked data structures, and condition variables and synchronization. Threads share the same address space, whereas processes do not, so threads allow much more efficient communication but can also lead to more problems, like synchronization issues. Parts of code where shared resources are accessed are called critical sections. These sections must be protected to ensure they are mutually exclusive and to prevent threads from executing them simultaneously. One way to do this is with locks, as they ensure only one thread is in a critical section at a time. Another way is with condition variables, which allow threads to wait for a condition before moving on to the next section. We also learned about indeterminate programs, where the output is inconsistent and unpredictable even when the input is the same. Locks help prevent this, as do atomic operations. Atomic operations are operations that make s...

CST Week 5 Journal Entry

1. You can work with up to three people (you MUST work with at least one other person). I worked with Sydney Stalker and Thomas Vandergroen.
2. What was your strategy for solving the assignments? My strategy was to complete all the stubs and empty methods, then go method by method. If I run into a method calling another, I do that one and then return, because I hate red errors.
3. What was THEIR strategy for solving the assignments? My teammates' strategies were to do the stubs first in all the classes, then do the methods (Sydney), and Thomas's strategy was to do the methods with the least amount of calls first.
4. How would you change your strategy having worked on the assignment? I do not think I would change the way I did my work. My strategy seemed to work and I like sticking with it...

CST 334 Week 4 Learning Journal

This week we continued learning about memory virtualization and the different ways operating systems virtualize memory. Segmentation chops the address space up into variable-sized pieces, but because of this it can lead to fragmentation, which makes allocation more challenging. Another approach is to chop the space up into fixed-sized pieces, called paging. Paging divides the address space into pages (instead of code, heap, or stack segments) and views physical memory as an array of fixed-size slots called page frames. Each of these two approaches has its own challenges and advantages. To record where pages are stored in physical memory, a page table is used; it stores the address translations of the pages. There is also a hybrid approach, which is a combination of paging and segmentation. Thrashing can also occur with paging when memory is overused and the demands of the running processes exceed the available physical memory. This leads to the system constantly pagi...