Posts

CST438 Week 8

This is the final journal entry for the course, so here are the five things that stuck with me most.

1. Code review is a shared responsibility, not a gatekeeping step. The reading from Software Engineering at Google reframed how I think about pull requests. Having distinct roles (peer engineer, code base owner, and readability approver) means everyone looks at the code through a different lens. That structure makes reviews more useful than a quick pass for typos.

2. A clean merge does not mean working code. This was the most counterintuitive thing from the Git week. Two developers can merge with zero conflicts and still break the program if one renamed something the other was still referencing. Git tracks text, not logic. Tests have to do the rest.

3. Larger tests trade speed for fidelity. Unit tests are fast and isolated, but they cannot tell you how the system behaves under real conditions. Larger tests, including exploratory and canary tests, get closer to production truth. The tra...
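The clean-merge point is easy to sketch. The file and function names below are made up for illustration; the scenario is the one from the Git week, where each branch edits a different file so git merges them with zero conflicts, and only a test catches the break:

```python
# Hypothetical illustration of a conflict-free merge that still breaks.
# Branch A renamed get_user() to fetch_user() in one file; branch B,
# in a different file, added a new call to get_user(). The edits never
# overlap, so git merges them cleanly -- git tracks text, not logic.

# merged state of utils.py (branch A's rename won)
def fetch_user(user_id):
    return {"id": user_id, "name": "demo"}

# merged state of report.py (branch B's new caller, still on the old name)
def build_report(user_id):
    return get_user(user_id)["name"]  # get_user no longer exists anywhere

# The merge succeeded; only actually running the code exposes the break.
try:
    build_report(42)
    merge_was_actually_fine = True
except NameError:
    merge_was_actually_fine = False

print(merge_was_actually_fine)  # False -- the clean merge broke the program
```

Nothing in the merge step could have flagged this, which is exactly why the tests have to do the rest.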

CST 489 - Final Journal Entry!!

Final Reflection

Looking back at two years of journals, the first thing that stands out is how much the writing itself changed. Early entries from CST300 read like someone still figuring out how to balance waking up at 3am for work, studying, and being a dad. There were schedules mapped out by the hour, and a lot of honesty about struggling with time management. By the end of CST489, those same constraints were still there, two jobs and a daughter with a full sports schedule, but the tone had shifted. Less survival, more momentum. The technical growth is easier to trace. I went from debugging basic Java methods and learning how to use a debugger properly in CST338, to deploying a full stack application on AWS using ECS, ECR, RDS, and S3. The capstone project was something I actually built from scratch and am proud of: a RAG-powered AWS Quiz Coach with a React frontend and a Python FastAPI backend with an AI agent that explains your wrong answers. That is not a project I could have conceived, l...

CST438 Week 7

Agile and Waterfall approach software development in fundamentally different ways. Waterfall is linear: you complete each phase fully before moving to the next, through requirements, design, implementation, testing, and deployment. The upside is that everything is documented and planned up front. The downside is that by the time you ship, the requirements may have already changed and you have no easy way to adapt mid-process. Agile flips that. Instead of planning everything at the start, you work in short iterations and adjust based on feedback as you go. The documentation is lighter and the focus is on delivering working software quickly and improving it over time. From my experience at Keasy, Agile makes more sense for most real projects. Requirements change, priorities shift, and customers rarely know exactly what they want until they see something working. Waterfall assumes a stability that usually does not exist.

CST489 Week 15

What project milestones did you accomplish this week?
This week I completed my project. The final implementation added a RAG agent into my AWS Quiz Coach, which is built with React on the frontend and Python with FastAPI on the backend.

What is your plan for next week?
Next week I plan to continue building on the skills I have developed throughout my time at CSUMB, with a strong focus on AWS cloud services and deepening my cloud knowledge going forward.

What challenges, if any, are you currently facing in project development? Do you need instructor assistance?
The main challenge throughout this project was time. Balancing two jobs and a daughter with sports practices and games every week made it hard to find long stretches to work. That said, my CSUMB journey is wrapping up and I am looking forward to putting more energy into my career path and working toward becoming a software engineer.

CST438 Week 6

This week's reading covered computing infrastructure and how large scale systems manage servers, containers, and workloads. The concept that clicked most for me was idempotency: the idea that issuing a request twice produces the same result as issuing it once. It sounds simple, but it has real implications for building reliable distributed systems, especially when retries are involved. The section on containers versus VMs also connected directly to work I have already done. Deploying my project on AWS using ECS and Docker made the tradeoffs feel concrete rather than theoretical. Containers win on startup time and footprint, but they are not the right tool for everything, particularly around managing state. The serverless model was interesting too. The engineer just provides the code and the platform handles the rest. It removes a lot of overhead but also takes away control, which is a real tradeoff depending on what you are building.
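The idempotency idea fits in a few lines. This is a minimal sketch with made-up names, not anything from the reading: a "set" operation is idempotent because a retried duplicate leaves the state unchanged, while an "add" operation is not, which is exactly why retries are dangerous around non-idempotent requests.

```python
# Minimal sketch of idempotency (names are illustrative).
store = {}

def set_balance(account, amount):
    # idempotent: running it twice leaves the same state as running it once
    store[account] = amount

def add_to_balance(account, amount):
    # not idempotent: every retry changes the state again
    store[account] = store.get(account, 0) + amount

set_balance("a", 100)
set_balance("a", 100)     # a retried duplicate is harmless
print(store["a"])         # 100

add_to_balance("b", 100)
add_to_balance("b", 100)  # a retried duplicate doubles the balance
print(store["b"])         # 200
```

A retry layer can safely re-issue the first request on a timeout; for the second, it needs deduplication or some other safeguard first.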

CST489 Week 14

What project milestones did you accomplish this week?
This week I completed my project. I integrated RAG into my AWS Cert Quiz app, which now includes an AI agent that explains why a selected answer is wrong and a chat feature that lets users ask follow up questions answered by the documents I fed into the RAG pipeline.

What is your plan for next week?
Next week I plan to deploy the project on AWS and finish whichever certifications I can before the end of the term.

What challenges, if any, are you currently facing in project development? Do you need instructor assistance?
No challenges at the moment and no instructor assistance needed.

CST438 Week 5

This week's reading from Software Engineering at Google covered large tests. The main takeaway is that larger tests trade speed and simplicity for fidelity. Unlike unit tests, they test how the system actually behaves in conditions closer to production, which means they catch things unit tests simply cannot. The tradeoff is that they are slower, more expensive to run, and harder to maintain. What stuck with me was that larger tests still use mocks in some cases, and that Google does not rely on full automation scripts for everything. Two key ingredients for large tests to work well are realistic seed data and production data, which makes sense because a test is only as useful as how closely it mirrors real usage.
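The seed-data point can be sketched in a few lines. Everything below is hypothetical, not from the reading: a unit-style check with one toy record passes easily, while a check seeded with a more realistic case (here, an empty list) exposes a gap the toy data never exercised.

```python
# Hypothetical sketch: why realistic seed data matters for larger tests.
def top_user(records):
    # returns the name of the record with the highest score
    return max(records, key=lambda r: r["score"])["name"]

# unit-style check with toy data: passes and proves very little
assert top_user([{"name": "a", "score": 1}]) == "a"

# a larger-test-style check seeded with a realistic edge case: real
# usage includes empty result sets, which the toy check never covered
try:
    top_user([])
    handles_empty = True
except ValueError:
    handles_empty = False

print(handles_empty)  # False -- the realistic seed data found the gap
```

The toy check mirrors real usage so loosely that it misses a failure the seeded check finds immediately, which is the fidelity tradeoff in miniature.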