Meet PVM Principal Engineer Michael Roberto
We sat down with Michael Roberto, one of PVM’s highly skilled and knowledgeable principal engineers, to learn about his role with PVM and his approach to engineering.
I was always interested in STEM in terms of science and math—they were my favorite classes—and I spent a lot of time around technology. My dad was a video solutions analyst for the County Police Department, and we had a lot of tech in our house at all times. He started to collect a couple of pinball machines, and we'd work on repairing them together. Even if it was just basic soldering or looking under the board, it was always enjoyable to do. I've always been a logical and problem-solving person, and it fell right into my wheelhouse of solving problems while enjoying working on and with technology. So, I had a lot of exposure, whether it was in my house or on the high school robotics team. It felt natural and something that I just enjoyed in general.
Depending on the priorities, it can be working on data pipelines, transforming raw data into cleaned, normalized datasets. It could be creating visualizations, or building direct data entry forms that let people fill out information and convert it into the format needed for storage. It could also be reviewing other people's code; we have a review process to make sure code follows certain standards and is semantically and syntactically correct.
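As a rough illustration of that kind of cleaning and normalization step, here is a minimal sketch in Python, assuming a pandas-style workflow; the column names and data are made up.

```python
import pandas as pd

# Hypothetical raw extract with messy column names, inconsistent casing,
# a missing value, and an impossible date.
raw = pd.DataFrame({
    "Name ": ["  alice", "BOB", None],
    "signup_date": ["2024-01-05", "2024-01-07", "2024-02-30"],
})

cleaned = (
    raw.rename(columns=lambda c: c.strip().lower())                  # normalize column names
       .assign(name=lambda df: df["name"].str.strip().str.title())  # tidy the values
       .dropna(subset=["name"])                                      # drop rows missing a name
)
# Coerce dates; invalid values like Feb 30 become NaT instead of crashing the pipeline.
cleaned["signup_date"] = pd.to_datetime(cleaned["signup_date"], errors="coerce")
print(cleaned)
```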
Sometimes it’s pathfinder work: exploring technologies, like those Palantir offers, that could potentially be supported in other contracts down the road. One example of a pathfinder exercise: given this technology, figure out what its use cases are, how it works, and what it can be applied to, and create demos for potential customers.
It also includes discovering ways to use Foundry to support someone’s use case. On one of our current projects, we were given raw binary files that we needed to upload as attachments to a Foundry Ontology object. That wasn’t something Foundry originally supported, so we developed a solution, trying different approaches and seeing what actually worked.
That’s a long-winded way of saying it depends on the day—anywhere from dev work to pathfinder exercises to code reviews and helping others debug their work.
I spent about two years in a SCIF while working for one of our clients. Because I was in that SCIF, I wasn't in a room with other teammates. Most of the communication was over messaging—Teams, Skype, or whatever was being used at the time.
There are pros and cons. As developers, it's a little easier to work in a hybrid environment because we have tools like screen sharing for pair programming and rubber-duck debugging: you can literally look at someone's screen and see what they’re doing instead of looking over their shoulder.
But you still have to maintain that balance of knowing when to get work done versus being communicative. Talking directly is often easier than typing it out. That’s where the tech helps—if I’m struggling to explain something, I just say, “Hey, let’s hop on a quick call.” Within 5 minutes we’ve solved it and moved on. The key is prioritizing and communicating effectively.
When I mentor people, I prefer not to give them the answer directly. I think it's important to understand what the answer is and why it’s the answer, instead of just being told.
Sometimes I see the problem right away, but I don’t think it’s effective to just say it. I help guide them to the solution themselves. It’s like in technical interviews—someone’s guiding you, giving hints. You want people to find the answer and understand why it’s correct.
I use peer coding or rubber ducking, where we're on the same screen, seeing what they’re doing, and looking at code together. I’ll give hints, guide them so they can find the problem or understand a workflow. I also show them my methodology. Everyone’s different—my way might not work for everyone—but I try to give them tools and info to succeed in engineering.
There's also gratification that comes from solving a problem without being handed the answer. I think a lot of engineers like that sense of achievement. There’s definitely a kind of positive reinforcement that comes from solving something on your own, even if you had guidance.
Besides figuring out tough technical problems, one of the biggest challenges is finding the balance between what the client wants versus what we know is the optimal solution. Clients always have something in mind, and we try to recommend what we think is best going forward. The key is being professional, courteous, and implementing their requests as best as you can.
It’s definitely a common situation—subject matter expert versus non-expert who has strong ideas. Sometimes we create quick proofs of concept to confirm if what the client wants is really what they envision. If it is, we go forward. If not, we’ve saved time.
Mostly through social media and news. There’s always some new post—new tech, new languages, and different forms of AI. I try to keep up with that.
With languages, it’s hard to stay up to date on all of them. There are so many, and each has a specific use case. It’s impossible to be proficient in all of them. In the workplace, you're usually using a specific stack—maybe 3–4 languages total.
If you're not at a company that embraces new tech or tries new things, it's easy to fall behind. And it can be really hard—or nearly impossible—to constantly update your codebase to keep up with every new release.
Adopting new releases also becomes tricky in the government space, where everything depends on security-vetted patches. Some codebases or releases aren’t yet approved as secure, and it can take months or even years for new versions to get approved. You have to wait, and that delay can impact your ability to stay current.
The biggest thing: don’t be afraid to ask questions.
You might spend four hours on a problem that could’ve been solved in ten minutes if you’d just asked. I’ve done that before—felt like I was bothering someone or didn’t want to distract them with a “simple” question. But everyone has questions when starting a new role, even if you’ve been in the industry for years. You still have to learn the codebase, the workflow, the systems—and it can be overwhelming without asking questions.
Questions help you expand your knowledge, understand the codebase better, know why something’s done a certain way, and find the solution you’ve been stuck on. Even experienced developers forget basic things like how to instantiate variables in different languages. Every language has its own syntax.
So ask questions. It saves headaches and helps you learn faster. I know I was afraid to at first, but 99% of the time, people are happy to help—because they’ve been there too.
Depending on the work you're doing, there are trade-offs between performance and how easy it is to write something. In school, it's usually drilled into your head to worry about performance and understand the different costs that come with sorting algorithms or nested loops, and yes, that’s very important to understand.
But it's also important to know how much data you're working with. For example, if I want to sort a number of things, I could use the most efficient algorithm: Quicksort, Merge Sort, or any of the O(n log n) ones. That’s great if I’m working with 10 million rows. But if I'm just working with 1,000 pieces of data, I don’t really need the most efficient algorithm, because the time difference is going to be in milliseconds, or less.
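A quick back-of-the-envelope check in Python makes the point. This just times the built-in O(n log n) sort at two sizes; the exact numbers will vary by machine.

```python
import random
import time

def time_sort(n: int) -> float:
    """Time Python's built-in O(n log n) sort on n random floats."""
    data = [random.random() for _ in range(n)]
    start = time.perf_counter()
    sorted(data)
    return time.perf_counter() - start

for n in (1_000, 10_000_000):
    print(f"n = {n:>10,}: {time_sort(n):.4f}s")
# At n = 1,000 the sort finishes in well under a millisecond;
# the choice of algorithm only starts to matter at the larger sizes.
```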
There are times when you definitely need to worry about performance—like when you need a transaction to happen as quickly as possible or you’re working with massive datasets. But if it’s smaller or only run once and can happen overnight, then I’m not stressing about hyper-efficiency. It’s more important to just get it done, especially if efficiency won't actually impact the client.
So I think it comes down to knowing where the code is used, how much data you're working with, and what the severity or implications of long-running functions are.
An example: when I was working with uploading binary files, I had implemented a fairly easy solution that ended up running into a one-minute timeout. I immediately realized that my current solution wasn’t going to work, so I had to find a more efficient one. That was a case where being aware of time complexity and performance mattered.
In the real world, hyper-efficiency isn’t always necessary. But once performance becomes a concern, that’s when you start applying those best practices.
It always comes down to personal preference, but readability is huge. The two biggest things I focus on are commenting my code and writing easy-to-read code.
That includes using descriptive variable names—you should be able to tell what a variable is doing just by reading it. Comments are also essential, especially when explaining complex functions or summarizing what's happening in a process.
I try not to use too many one-liners. They can be fun to write and compact, but they’re hard to read if you weren’t the one who wrote them. Chaining ternary operators or stuffing logic into one line might seem cool, but it’s often better to use a standard if-else block for clarity.
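For instance, in Python, both of these compute the same thing. The chained conditional expression is compact, but the plain block is far easier to scan:

```python
x = 42

# Compact but hard to read: two conditional expressions chained on one line.
label = "low" if x < 10 else "medium" if x < 100 else "high"

# Same logic as a standard if/elif/else block: longer, but each branch is obvious.
if x < 10:
    label = "low"
elif x < 100:
    label = "medium"
else:
    label = "high"

print(label)  # "medium"
```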
Commenting becomes crucial when you revisit code after a month and forget what it does. I’ve gone through plenty of repos with no comments at all, which makes debugging and understanding incredibly frustrating.
Naming conventions matter too. Something like lifeCounter is clear, but if you name it something cryptic like lc or use vague acronyms, it gets confusing. Even shortening variable names can be harmful if it sacrifices clarity.
Our team follows consistent naming conventions in pull requests, like PascalCase and camelCase, and keeps things uniform across a repo. If I’m joining a project midway, I’ll try to follow the style that’s already established, just to make it easier to digest. Codebases can become a mess when everyone writes differently, so we try to be proactive and prevent those messes.
The biggest thing is thinking about security from the start. It's hard to bolt on security after the fact—it usually ends up being a patch rather than a true solution.
Understand how users are accessing data and what vulnerabilities could exist. Knowing the ten most common vulnerabilities is helpful. If you're working with SQL, always parameterize your queries. Don't trust user input; validate and sanitize it.
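As a concrete illustration, here is a minimal sketch using Python's built-in sqlite3 module, with a made-up users table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))

user_input = "alice' OR '1'='1"  # hostile input a caller might send

# Unsafe: splicing raw input into the SQL string invites injection.
#   query = f"SELECT * FROM users WHERE name = '{user_input}'"

# Safe: the ? placeholder sends the value separately from the statement,
# so the driver treats the input as data, never as SQL.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # [] -- the injection attempt matches nothing
```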
Also, don’t assume what kind of data is coming in. If there's a front end sending data to a back end, the back end should never just trust it. Do your own validations. Someone might pass an emoji or a non-English character that you didn’t plan for, and that can break things or cause vulnerabilities.
You should also build in explicit assumptions about the data you're expecting. Keeping security in mind during the design phase is a must.
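A minimal sketch of that idea: make the assumptions about incoming data explicit in one place. The field names and limits here are hypothetical, chosen just for illustration.

```python
def validate_payload(payload: dict) -> dict:
    """Back-end validation: never assume the front end sent sane data."""
    errors = []

    name = payload.get("name")
    if not isinstance(name, str) or not name.strip():
        errors.append("name must be a non-empty string")
    elif len(name) > 100:
        errors.append("name must be 100 characters or fewer")

    age = payload.get("age")
    if not isinstance(age, int) or not 0 <= age <= 150:
        errors.append("age must be an integer between 0 and 150")

    if errors:
        raise ValueError("; ".join(errors))
    return {"name": name.strip(), "age": age}

print(validate_payload({"name": "Ada", "age": 36}))  # {'name': 'Ada', 'age': 36}
```

Centralizing the checks like this also documents the data contract for the next developer who touches the code.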
Even with all that, it’s impossible to catch every vulnerability. Zero-day attacks exist. So the key is to be proactive and reactive: have detailed logs, rollback plans, user restrictions—anything that helps minimize impact and speed up recovery when something goes wrong.
It depends on the platform or service, but at a minimum, you need logging and some form of audit trail.
Tools like Splunk, Datadog, and other SaaS platforms are great for centralized logging and monitoring. You can run queries, set up alerts, and easily search through logs. But even if you don’t have access to those tools, plain text logs are better than nothing.
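Even the plain-text fallback goes a long way. Here is a minimal sketch using Python's standard logging module; the file name and log fields are just examples.

```python
import logging

# Plain-text logs with a consistent format: timestamp, level, logger name, message.
logging.basicConfig(
    filename="app.log",
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s %(message)s",
)
log = logging.getLogger("transfers")

log.info("transfer started user_id=%s amount=%s", 42, "19.99")
try:
    raise TimeoutError("upstream service did not respond")
except TimeoutError:
    log.exception("transfer failed user_id=%s", 42)  # also records the traceback
```

Consistent key=value fields like these make even flat text files searchable later.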
When I was working on a project for one of our clients, we had to search through massive text files to debug issues. Painful, yes, but without those logs, we’d be completely lost.
Logs help with everything—debugging, performance tracking, and especially identifying security breaches. Having a clear record of what happened, when, and by whom can be the difference between a fast recovery and complete chaos.
Also, data retention policies matter. Know what you can store and for how long. And make sure you're logging what’s actually important—not just noise.
The biggest one? People treat AI like a search engine and trust the results as absolute truth.
AI is non-deterministic—meaning the same input won’t always give you the same output. Sometimes, it just makes things up. I’ve used platform AIs that return fake libraries, reference nonexistent tools, or give outdated info.
It’s a helpful tool when used properly. I usually treat it as a starting point—use it to get ideas or surface documentation, then do my own research to validate everything before implementation.
AI can absolutely increase efficiency, but it has to be used with caution, and its output has to be verified for accuracy by the user.
It comes down to a pros-and-cons analysis for IT security vulnerabilities, weighing all the different factors that can impact whether something is needed or not.
PVM was founded in 2010 by Pat Mack, a retired Naval officer, who wanted to solve the hard, data-driven problems Sailors were facing on the front lines every day. Today, we continue to be driven by that same goal, and are focused on taking on our clients’ missions as our own to make a difference in the communities we serve. As a black-owned, service-disabled veteran-owned, and women-run small business, we bring diverse perspectives to every project.
At PVM, we're not just a team, we are a vibrant community of diverse individuals who are passionate about the problems we solve for our customers. We believe in delivering value, fostering innovation, and nurturing leadership skills, all while creating an environment that supports personal and professional growth.
Interested in joining our team? View our open positions.