Key Projects


When I began working in Exelon’s Network Operations Center, one operational challenge became increasingly clear: while we had the data to understand system health across the enterprise, it was not accessible in a centralized or actionable format. System performance reports, compliance data, and service statuses were stored across individual monitoring platforms, spreadsheets, and most notably, an internal SharePoint site.

Because of this limitation, users across business units routinely contacted the NOC to ask whether systems were online, whether an incident was already being handled, or to open new service tickets. Many support teams could not view existing ticket progress or submit tickets independently. Instead of self-service reporting, they depended on a phone call or email to the NOC. Over time, this resulted in a substantial amount of avoidable operational overhead.

After several years in this environment, I identified that the true issue wasn’t a lack of monitoring—it was a lack of accessible, compliant visibility and workflow autonomy. This led to the concept of the ITOC Dashboard (Information Technology Operations Center Dashboard): a centralized, secure platform that would deliver live system health, ticketing insight, and controlled self-service capabilities across the enterprise.

To build it, I first established the necessary infrastructure. I provisioned both sandbox and production servers, each with redundancy and failover capability to ensure high availability. DNS and DHCP configurations were carefully defined, backup and recovery strategies were put in place, and all development activities were conducted within NERC/CIP-compliant environments.
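
As a simplified illustration of the kind of provisioning involved, the sketch below shows how DNS records and a DHCP scope can be defined with the Windows Server PowerShell modules; the zone, hostnames, and address ranges are placeholders rather than the actual production values.

```powershell
# Illustrative sketch only: zone names, hostnames, and IP ranges are placeholders.
Import-Module DnsServer
Import-Module DhcpServer

# Register A records for the sandbox and production dashboard servers (hypothetical names)
Add-DnsServerResourceRecordA -ZoneName 'corp.example.local' -Name 'itoc-dash-prod' -IPv4Address '10.10.20.10'
Add-DnsServerResourceRecordA -ZoneName 'corp.example.local' -Name 'itoc-dash-sbx'  -IPv4Address '10.10.30.10'

# Define a DHCP scope for the supporting segment, with a reservation for the production server
Add-DhcpServerv4Scope -Name 'ITOC-Segment' -StartRange '10.10.20.100' -EndRange '10.10.20.200' -SubnetMask '255.255.255.0'
Add-DhcpServerv4Reservation -ScopeId '10.10.20.0' -IPAddress '10.10.20.10' -ClientId '00-11-22-33-44-55' -Name 'itoc-dash-prod'
```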

Data integration was handled primarily through PowerShell. I developed automated scripts to collect information from monitoring tools, event logs, service management systems, and operational datasets. PowerShell and HTML were used together to transform, format, and route data into structured outputs that could be ingested by Microsoft Power BI. Microsoft 365 services supported secure internal collaboration and dataset management.
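
A rough sketch of that collection pattern is shown below; the host list, file paths, and service names are hypothetical stand-ins for the real sources. A script along these lines gathers service status and recent event-log errors, then writes structured outputs that Power BI can pick up on a scheduled refresh.

```powershell
# Illustrative sketch only: host lists, paths, and service names are placeholders,
# not the actual production sources.

# Collect service health from a set of monitored hosts (hypothetical list)
$computers = Get-Content '\\fileshare\itoc\monitored-hosts.txt'
$serviceHealth = foreach ($c in $computers) {
    Get-Service -ComputerName $c -Name 'W3SVC', 'Spooler' -ErrorAction SilentlyContinue |
        Select-Object @{ n = 'Host'; e = { $c } }, Name, Status,
                      @{ n = 'Collected'; e = { Get-Date } }
}

# Pull recent error and warning events as a simple incident signal
$events = Get-WinEvent -FilterHashtable @{ LogName = 'System'; Level = 2, 3; StartTime = (Get-Date).AddHours(-1) } |
    Select-Object MachineName, TimeCreated, Id, LevelDisplayName, Message

# Write structured outputs that Power BI can ingest on a refresh schedule
$serviceHealth | Export-Csv '\\fileshare\itoc\service-health.csv' -NoTypeInformation
$events | Export-Csv '\\fileshare\itoc\system-events.csv' -NoTypeInformation

# A lightweight HTML summary generated from the same data
$serviceHealth | ConvertTo-Html -Title 'Service Health' | Out-File '\\fileshare\itoc\service-health.html'
```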

The dashboard was hosted using IIS and secured via Active Directory-based authentication and internal policy enforcement. Every access attempt was logged, and permissions were restricted according to role and compliance requirements. This ensured that the dashboard enhanced visibility while remaining fully aligned with NERC/CIP and internal cybersecurity controls.
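
The fragment below is a minimal sketch of that hosting setup using the WebAdministration module; the site name, path, and Active Directory group are placeholders, and it assumes the IIS Windows Authentication and URL Authorization features are installed.

```powershell
# Illustrative sketch: site name, path, and AD group are placeholders.
Import-Module WebAdministration

# Create the dashboard site in IIS (hypothetical path; HTTPS binding assumed)
New-Website -Name 'ITOC-Dashboard' -PhysicalPath 'D:\Sites\ITOC' -Port 443 -Ssl

# Require Active Directory (Windows) authentication and disable anonymous access
Set-WebConfigurationProperty -PSPath 'IIS:\' -Location 'ITOC-Dashboard' `
    -Filter 'system.webServer/security/authentication/windowsAuthentication' `
    -Name 'enabled' -Value $true
Set-WebConfigurationProperty -PSPath 'IIS:\' -Location 'ITOC-Dashboard' `
    -Filter 'system.webServer/security/authentication/anonymousAuthentication' `
    -Name 'enabled' -Value $false

# Limit access to a specific AD group (assumes the URL Authorization feature is installed)
Add-WebConfiguration -PSPath 'IIS:\' -Location 'ITOC-Dashboard' `
    -Filter 'system.webServer/security/authorization' `
    -Value @{ accessType = 'Allow'; roles = 'CORP\ITOC-Dashboard-Users'; users = '' }
```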

In addition to real-time system health data, I integrated ticketing functionality directly into the dashboard. Users could check the live status of existing tickets, determine whether an issue was already being addressed, and open new tickets without contacting the NOC. This eliminated a longstanding bottleneck—previously, service requests had to be submitted verbally or via email to operations staff, who would then manually create or update tickets on behalf of users.

Once deployed, the ITOC Dashboard significantly reduced the number of status-related calls and ticket submission requests coming into the NOC. Teams gained immediate insight into infrastructure health, incident progress, and service availability, allowing them to make faster and more informed operational decisions. Leadership gained a comprehensive enterprise-wide view without relying on static reports or manual updates.

The project was later showcased at Exelon’s Enterprise Innovation Expo, where it was formally recognized for its impact on operational efficiency, compliance alignment, and innovation within a regulated environment.

This project reinforced the importance of designing solutions that unify data, security, and usability. More importantly, it demonstrated how improving visibility and self-service capabilities can relieve operational pressure, reduce dependency on centralized teams, and strengthen reliability in mission-critical environments.


When I joined WMAR ABC2, the network infrastructure was a major operational liability. The environment had grown organically over the years without a cohesive design: an assortment of unmanaged hubs, daisy-chained switches, and minimal routing equipment held together more by habit than by architecture. Network uptime averaged between 60% and 70%, and outages impacted everything from newsroom production to live broadcast transmission. For a 24/7 broadcast station, downtime was more than an inconvenience; it was a direct threat to operations and revenue.

I was the sole member of the IT department at the station, but I knew the issue could not be resolved with incremental fixes. Working in collaboration with Scripps corporate and local engineering, I began designing a complete network infrastructure overhaul, replacing both the physical and logical network from the ground up while ensuring compliance with corporate policy and broadcast continuity requirements. The project included full planning, budgeting, procurement, vendor coordination, and execution, as well as securing buy-in from local stakeholders.

The first step was designing a new logical architecture that supported redundancy, segmentation, and future scalability. The existing copper-based backbone was replaced with a fiber optic core to improve throughput and eliminate bottlenecks between server rooms, control rooms, and newsroom and editing operations. Every existing network drop was evaluated, removed, and replaced to ensure proper labeling, cable management, and performance consistency.

On the hardware side, the station migrated from unmanaged switching equipment to a fully managed enterprise-grade infrastructure. HP ProCurve switches were deployed at the core and distribution layers for reliability and ease of management. Juniper firewalls replaced outdated edge devices, improving both security posture and traffic control. The new design incorporated segmented VLANs for editorial systems, automation servers, media storage, corporate systems, and guest or wireless access, improving both performance and security isolation.
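
A simplified sketch of that segmentation, in ProCurve-style CLI, is shown below; the VLAN IDs, names, and port assignments are illustrative only and do not reflect the station's actual numbering.

```
vlan 10
   name "Editorial"
   untagged 1-12
   tagged 47-48
   exit
vlan 20
   name "Automation"
   untagged 13-20
   tagged 47-48
   exit
vlan 30
   name "Media-Storage"
   untagged 21-36
   tagged 47-48
   exit
vlan 40
   name "Corporate"
   untagged 37-44
   tagged 47-48
   exit
vlan 50
   name "Guest-Wireless"
   untagged 45-46
   tagged 47-48
   exit
```

Ports 47-48 here stand in for the tagged uplinks that carry all VLANs toward the distribution layer, while broadcast-critical traffic stays isolated from corporate and guest segments.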

One of the most challenging components of this project was execution. WMAR is a 24/7 live broadcast operation, which meant shutting down systems or cutting connectivity was not an option. I worked closely with vendors, facilities, and engineering teams to coordinate a live hot swap of the physical network. Fiber was pulled and terminated, racks were restructured, and switching was replaced, all while active broadcasts remained uninterrupted. Failover strategies were tested in advance, and changes were staged and executed during low-impact broadcast windows, such as the period after the holiday break.

The project also required strict budget management. With a cost ceiling of approximately $500,000, I coordinated procurement, evaluated vendor proposals, negotiated equipment pricing, and ensured every phase of the rollout stayed within expected financial limits. Despite the scope of the overhaul, the project was delivered on time, under budget, and without any major incident or broadcast outage.

Once deployed, the results were immediate and measurable. Network uptime increased from an average of 60 to 70 percent to full, continuous reliability. Latency dropped significantly, large media file transfers became faster and more stable, and newsroom systems experienced far fewer interruptions. The new infrastructure also laid the foundation for future upgrades, including virtualized servers, centralized storage, and improved disaster recovery capabilities.

Leading this project as a one-person IT department required not only technical capability, but coordination, planning, and precise execution under pressure. It modernized WMAR’s technology backbone, aligned it with corporate standards, and transformed an unstable network into a resilient, secure, and scalable platform that supported 24/7 broadcast operations reliably.


At WBFF, one of the most critical pieces of the broadcast workflow is the Avid editing environment. Every news story, promotional segment, and on-air graphic passes through one of these editing suites before it ever reaches a control room or transmitter. While working in help desk support at the station, I was asked to assist with one of the most technically sensitive upgrades we had undertaken in years: a complete refresh of our Avid editing bay computers and backend systems, all while staying fully operational in a 24/7 news environment.

The existing editing systems were aging, prone to performance delays, and struggling to keep up with growing demands for faster content turnaround and higher-resolution media. The engineering department led the initiative to modernize the editing environment, and I supported the project directly from both the workstation and server sides of the workflow.

The first step was replacing every editing suite computer in the building: more than 50 Avid workstations used by editors, producers, and graphic artists across the news, promotions, and production departments. Each system had to be upgraded, configured, networked, and tested without disrupting a single newscast. We built out and configured the replacement machines in advance, imaged them with the proper Avid, newsroom, and automation software, and matched every system to its corresponding storage mappings, render settings, and newsroom user profiles.

On the backend, the station’s Avid Interplay and shared storage servers were also upgraded to support higher throughput, better media indexing, and compatibility with the new workstations. This required coordination between IT, engineering, and news and production teams to migrate data, ensure redundancy, and protect active media assets. These servers had to integrate seamlessly with other mission-critical broadcast systems including media ingestion machines, graphics engines, newsroom automation software, and playback systems used during live broadcasts.

One of the most important parts of the upgrade was ensuring the full media workflow remained uninterrupted. Files needed to move from field cameras to ingest machines, from ingest to editors, from editors to on-air graphics, and finally to the servers that fed the control room for live broadcast, all without error, latency, or sync issues. Every stage was tested repeatedly to confirm that metadata, rendering paths, audio channels, and timecode alignment worked as expected.

Although I wasn’t leading the project, I played a significant operational role. I worked closely with the engineering team: configuring the new systems, terminating connections, running cable, troubleshooting workstation-to-server connections, validating licensing and project access, and supporting editors during the transition. It required careful scheduling, after-hours work, and constant communication between departments.

The defining achievement of this project was that it was completed with zero on-air downtime. Every newscast aired on schedule. Editors continued working while old systems were replaced and new ones were brought online. Graphics, automation timing, and ingest chains all remained in sync.

The upgrade resulted in faster rendering times, more reliable media sharing, improved system stability, and a modern foundation for future digital workflows. For me, it was a defining early-career experience — contributing to a complex, broadcast-critical technology transition and learning how IT, engineering, and production teams come together to execute a technical overhaul without ever interrupting what matters most: staying on-air.