How to Handle Errors Gracefully in PowerShell Scripts
Error handling in PowerShell scripts isn't just a technical requirement—it's the difference between a script that fails silently and leaves you wondering what went wrong, and one that communicates clearly, recovers gracefully, and maintains system integrity. When your automation runs in production environments, manages database operations, or orchestrates cloud resources, proper error handling becomes your safety net. Without it, you're essentially driving without brakes, hoping nothing unexpected happens along the way.
Error handling in PowerShell refers to the systematic approach of anticipating, detecting, and responding to exceptions and failures that occur during script execution. This comprehensive guide examines multiple methodologies—from basic try-catch blocks to advanced error action preferences—giving you a complete toolkit for building resilient automation. Whether you're dealing with terminating errors that halt execution or non-terminating errors that allow scripts to continue, understanding the nuances makes all the difference.
Throughout this exploration, you'll discover practical techniques for implementing robust error handling mechanisms, learn how to create meaningful error messages that actually help with troubleshooting, and understand when to use different error handling strategies. You'll also find ready-to-implement code examples, comparative tables showing different approaches, and insights into logging practices that will transform your PowerShell scripts from fragile automation into production-ready solutions.
Understanding PowerShell Error Types and Their Behavior
PowerShell distinguishes between two fundamental error categories that behave dramatically differently in your scripts. Terminating errors immediately stop execution and transfer control to error handling blocks, while non-terminating errors display a message but allow the script to continue processing subsequent commands. This distinction isn't merely academic—it fundamentally affects how you structure your error handling logic and determines whether your script can recover from failures or must abort entirely.
Terminating errors typically occur with cmdlets that use the -ErrorAction Stop parameter or when critical system failures happen. These errors trigger the exception handling mechanism, allowing you to catch them with try-catch blocks. Non-terminating errors, conversely, are PowerShell's default behavior for most cmdlet failures, designed to let scripts continue processing multiple items even when individual operations fail.
"The most dangerous errors are the ones you never see—silent failures that leave your systems in inconsistent states without any indication something went wrong."
The ErrorRecord object contains comprehensive information about what went wrong, including the exception type, the command that failed, the target object, and the script location where the error occurred. Understanding this object's structure enables you to create intelligent error handling that responds differently based on error characteristics rather than treating all failures identically.
| Error Type | Behavior | Catchable with Try-Catch | Common Scenarios | Default Action |
|---|---|---|---|---|
| Terminating Error | Stops execution immediately | Yes | File not found, access denied, syntax errors | Halts script |
| Non-Terminating Error | Displays error but continues | No (unless converted) | Failed item in pipeline, network timeout | Continues execution |
| Warning | Informational message | No | Deprecated features, potential issues | Displays and continues |
| Verbose/Debug | Optional detailed information | No | Troubleshooting information | Hidden by default |
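A minimal experiment makes the distinction in the table concrete (the paths are illustrative and deliberately nonexistent):

```powershell
# Non-terminating (default): the error is displayed, but execution continues
Get-ChildItem -Path "C:\DoesNotExist"
Write-Host "This line still executes"

# Terminating: -ErrorAction Stop converts the failure, so the catch block receives control
try {
    Get-ChildItem -Path "C:\DoesNotExist" -ErrorAction Stop
    Write-Host "This line is never reached"
}
catch {
    Write-Host "Caught: $($_.Exception.Message)"
}
```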
The ErrorAction Parameter and Its Impact
Every PowerShell cmdlet supports the ErrorAction common parameter, giving you granular control over how individual commands respond to errors. This parameter accepts several values that fundamentally change error behavior: Stop converts non-terminating errors into terminating ones, Continue displays the error and continues (default behavior), SilentlyContinue suppresses error messages but continues execution, Inquire prompts the user for action, and Ignore completely suppresses errors without adding them to the error stream.
The strategic use of ErrorAction transforms how your scripts handle failures. Setting ErrorAction to Stop on critical operations ensures failures are caught by your try-catch blocks, while using SilentlyContinue on optional operations prevents cluttering output with expected failures. However, SilentlyContinue should be used judiciously—suppressing errors without proper logging creates the dangerous scenario of silent failures.
```powershell
# Convert a non-terminating error into a terminating one
Get-ChildItem -Path "C:\NonExistent" -ErrorAction Stop

# Suppress expected errors for optional operations
Get-Process -Name "OptionalService" -ErrorAction SilentlyContinue

# Set the preference for the entire script
$ErrorActionPreference = "Stop"
```
Implementing Try-Catch-Finally Blocks Effectively
The try-catch-finally construct forms the backbone of structured error handling in PowerShell, providing a clean separation between normal execution logic, error handling code, and cleanup operations. The try block contains code that might fail, catch blocks handle specific error types, and the finally block executes regardless of whether errors occurred—perfect for cleanup operations like closing file handles or database connections.
Within the catch block, the automatic variable $_ (also available as $PSItem) contains the error record, giving you access to detailed information about what failed. The Exception property provides the actual .NET exception object, while properties like TargetObject, CategoryInfo, and InvocationInfo offer context about where and why the failure occurred.
```powershell
try {
    # Potentially failing operation
    $content = Get-Content -Path "C:\Important\Data.txt" -ErrorAction Stop
    $result = Process-Data -Content $content -ErrorAction Stop
    # Additional operations that depend on success
    Export-Results -Data $result -Path "C:\Output\Results.csv"
}
catch [System.Management.Automation.ItemNotFoundException] {
    # Specific handling for missing files (Get-Content reports a missing path
    # as ItemNotFoundException, not System.IO.FileNotFoundException)
    Write-Error "Required data file not found: $($_.Exception.Message)"
    Send-AlertEmail -Subject "Missing Data File" -Body $_.Exception.Message
}
catch [System.UnauthorizedAccessException] {
    # Handle permission issues differently
    Write-Error "Access denied: $($_.Exception.Message)"
    Write-EventLog -LogName Application -Source "MyScript" -EventId 1001 -EntryType Error -Message $_.Exception.Message
}
catch {
    # Generic handler for unexpected errors
    Write-Error "Unexpected error occurred: $($_.Exception.Message)"
    Write-Error "Error at line $($_.InvocationInfo.ScriptLineNumber)"
    throw # Re-throw if you can't handle it
}
finally {
    # Cleanup operations that must always run
    if ($connection) { $connection.Close() }
    if ($fileStream) { $fileStream.Dispose() }
    Write-Verbose "Cleanup completed"
}
```
Multiple Catch Blocks for Specific Error Types
Implementing multiple catch blocks allows your scripts to respond intelligently to different failure scenarios. Rather than treating all errors identically, you can implement retry logic for transient network failures, send alerts for permission issues, and gracefully degrade functionality when optional resources are unavailable. The order of catch blocks matters—PowerShell evaluates them sequentially, so place more specific exception types before generic ones.
"Error handling isn't about preventing failures—it's about controlling what happens when they inevitably occur and ensuring your systems respond appropriately."
Common exception types you'll encounter include System.IO.FileNotFoundException for missing files, System.UnauthorizedAccessException for permission issues, System.Net.WebException for network failures, and System.InvalidOperationException for logic errors. Catching these specifically enables targeted responses rather than generic error messages that provide little actionable information.
Leveraging the Automatic Error Variables
PowerShell maintains several automatic variables that track error information throughout your script's execution. The $Error collection holds the errors that have occurred in the current session, with the most recent error at index zero (it is capped at $MaximumErrorCount entries, 256 by default). This collection persists across commands, allowing you to review error history and implement logic that responds to cumulative failure patterns rather than just the immediate error.
The $Error variable proves invaluable for debugging and post-execution analysis. You can examine multiple errors to identify patterns, clear the array when starting critical operations to ensure you're only seeing relevant failures, and implement retry logic that checks whether specific errors have occurred multiple times. Each element in $Error is a full ErrorRecord object containing the same detailed information available in catch blocks.
```powershell
# Examining the most recent error
$lastError = $Error[0]
Write-Host "Last error message: $($lastError.Exception.Message)"
Write-Host "Failed command: $($lastError.InvocationInfo.MyCommand)"
Write-Host "Line number: $($lastError.InvocationInfo.ScriptLineNumber)"

# Clearing errors before a critical section
$Error.Clear()
Invoke-CriticalOperation
if ($Error.Count -gt 0) {
    Write-Error "Critical operation failed with $($Error.Count) errors"
    foreach ($err in $Error) {
        Write-Log -Message $err.Exception.Message -Level Error
    }
}

# Checking for specific error types in history
$accessDeniedErrors = @($Error | Where-Object { $_.Exception -is [System.UnauthorizedAccessException] })
if ($accessDeniedErrors.Count -gt 3) {
    Write-Warning "Multiple permission failures detected - check service account permissions"
}
```
Understanding ErrorActionPreference
The $ErrorActionPreference variable sets the default error handling behavior for your entire script or session, eliminating the need to specify ErrorAction on every cmdlet. Setting this to "Stop" at the beginning of your script converts all non-terminating errors to terminating ones, making them catchable with try-catch blocks. This approach simplifies error handling but requires careful consideration—some cmdlets legitimately produce non-terminating errors for expected conditions.
Different preference values suit different scenarios: "Stop" works well for critical automation where any failure should halt execution, "Continue" (the default) suits interactive scripts where users need visibility into problems, and "SilentlyContinue" fits scenarios where you're checking for optional resources. You can also temporarily change the preference for specific script sections, then restore it afterward.
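One way to scope a stricter preference to a critical section, as described above, is to save the current value and restore it in a finally block (the file paths are placeholders):

```powershell
# Save the current preference, tighten it for the critical section, then restore it
$previousPreference = $ErrorActionPreference
$ErrorActionPreference = "Stop"
try {
    # Any failure in here is now terminating and therefore catchable upstream
    Copy-Item -Path "C:\Source\Config.json" -Destination "C:\Deploy\Config.json"
}
finally {
    # Restore the original behavior even if the section failed
    $ErrorActionPreference = $previousPreference
}
```

Restoring in finally rather than after the try block guarantees the preference is reset even when the critical section throws.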
Creating Custom Error Messages and Logging
Generic error messages like "operation failed" provide minimal value when troubleshooting production issues at 3 AM. Effective error messages include context about what operation was attempted, what data was being processed, why the failure occurred, and what actions might resolve the issue. This information transforms error handling from a debugging obstacle into a troubleshooting asset.
Structured logging takes error handling beyond console output, creating persistent records that support trend analysis, compliance requirements, and root cause analysis. Whether you're writing to Windows Event Log, text files, or centralized logging systems, consistent log formatting and appropriate severity levels make the difference between useful logs and noise.
"The quality of your error messages directly correlates to how quickly problems get resolved—invest time in making them informative and actionable."
```powershell
function Write-ErrorLog {
    param(
        [Parameter(Mandatory)]
        [string]$Message,

        [Parameter(Mandatory)]
        [System.Management.Automation.ErrorRecord]$ErrorRecord,

        [string]$LogPath = "C:\Logs\PowerShell\Errors.log"
    )

    $timestamp = Get-Date -Format "yyyy-MM-dd HH:mm:ss"
    $errorDetails = @"
[$timestamp] ERROR: $Message
Exception Type: $($ErrorRecord.Exception.GetType().FullName)
Exception Message: $($ErrorRecord.Exception.Message)
Command: $($ErrorRecord.InvocationInfo.MyCommand)
Line: $($ErrorRecord.InvocationInfo.ScriptLineNumber)
Position: $($ErrorRecord.InvocationInfo.PositionMessage)
Stack Trace: $($ErrorRecord.ScriptStackTrace)
---
"@

    Add-Content -Path $LogPath -Value $errorDetails

    # Also write to the event log for critical errors
    if ($ErrorRecord.Exception -is [System.UnauthorizedAccessException] -or
        $ErrorRecord.Exception -is [System.IO.IOException]) {
        Write-EventLog -LogName Application -Source "PowerShellAutomation" `
            -EventId 1000 -EntryType Error -Message $errorDetails
    }
}

# Usage in error handling
try {
    $data = Import-Csv -Path $InputFile -ErrorAction Stop
    Process-Records -Records $data -ErrorAction Stop
}
catch {
    # @() guards against $data being $null when Import-Csv itself failed
    $contextMessage = "Failed to process input file '$InputFile' containing $(@($data).Count) records"
    Write-ErrorLog -Message $contextMessage -ErrorRecord $_

    # Send notification for critical failures
    Send-MailMessage -To "ops@company.com" -From "automation@company.com" `
        -SmtpServer "smtp.company.com" -Subject "Automation Failure" `
        -Body "Processing failed: $contextMessage`n`nError: $($_.Exception.Message)"

    throw # Re-throw to halt execution
}
```
Implementing Retry Logic for Transient Failures
Network operations, database connections, and API calls frequently experience transient failures—temporary conditions that resolve themselves within seconds or minutes. Implementing retry logic with exponential backoff transforms these momentary hiccups from script failures into transparent recoveries. Rather than failing immediately when a remote service is temporarily unavailable, your script waits progressively longer between attempts, giving the service time to recover.
Effective retry logic distinguishes between transient and permanent failures. Network timeouts warrant retries, but authentication failures typically don't. The implementation should include maximum retry counts to prevent infinite loops, exponential backoff to avoid overwhelming recovering services, and logging that tracks retry attempts for troubleshooting.
| Scenario | Should Retry | Recommended Max Retries | Backoff Strategy | Notes |
|---|---|---|---|---|
| Network Timeout | ✅ Yes | 3-5 | Exponential (2, 4, 8 seconds) | Common transient failure |
| Service Unavailable (503) | ✅ Yes | 3-5 | Exponential with jitter | Service may be restarting |
| Authentication Failure | ❌ No | 0 | N/A | Credentials won't fix themselves |
| File Locked | ✅ Yes | 5-10 | Linear (1 second intervals) | Another process may release lock |
| Resource Not Found | ❌ No | 0 | N/A | Missing resources won't appear |
| Rate Limiting (429) | ✅ Yes | 3 | Use Retry-After header | Respect API rate limits |
```powershell
function Invoke-WithRetry {
    param(
        [Parameter(Mandatory)]
        [scriptblock]$ScriptBlock,

        [int]$MaxRetries = 3,

        [int]$InitialDelaySeconds = 2,

        [string[]]$RetryableExceptions = @(
            'System.Net.WebException',
            'System.IO.IOException',
            'System.TimeoutException'
        )
    )

    $attempt = 0
    $delay = $InitialDelaySeconds

    while ($attempt -le $MaxRetries) {
        try {
            $attempt++
            Write-Verbose "Attempt $attempt of $($MaxRetries + 1)"

            # Execute the script block
            $result = & $ScriptBlock
            Write-Verbose "Operation succeeded on attempt $attempt"
            return $result
        }
        catch {
            $exceptionType = $_.Exception.GetType().FullName
            $isRetryable = $RetryableExceptions -contains $exceptionType

            if (-not $isRetryable -or $attempt -gt $MaxRetries) {
                Write-Error "Operation failed after $attempt attempts: $($_.Exception.Message)"
                throw
            }

            Write-Warning "Attempt $attempt failed: $($_.Exception.Message)"
            Write-Verbose "Waiting $delay seconds before retry..."
            Start-Sleep -Seconds $delay

            # Exponential backoff with jitter
            $delay = $delay * 2 + (Get-Random -Minimum 0 -Maximum 2)
        }
    }
}

# Usage example
$webData = Invoke-WithRetry -MaxRetries 5 -ScriptBlock {
    Invoke-RestMethod -Uri "https://api.example.com/data" -TimeoutSec 30 -ErrorAction Stop
}
```
Implementing Circuit Breaker Pattern
The circuit breaker pattern prevents your script from repeatedly attempting operations that are likely to fail, protecting both your script and the remote service from unnecessary load. After a threshold of consecutive failures, the circuit "opens," immediately failing requests without attempting them for a cooldown period. This pattern proves essential when calling external APIs or services that might experience extended outages.
"Retry logic without circuit breakers is like repeatedly trying to open a locked door—eventually you need to acknowledge it's not going to open and try something else."
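The article doesn't prescribe an implementation, but a minimal circuit breaker can be sketched with script-scoped state; the threshold, cooldown, and function name here are illustrative choices, not an established API:

```powershell
# Minimal circuit breaker: after $FailureThreshold consecutive failures, fail fast
# for $CooldownSeconds instead of attempting the operation again.
$script:CircuitState = @{ Failures = 0; OpenedAt = $null }

function Invoke-WithCircuitBreaker {
    param(
        [Parameter(Mandatory)][scriptblock]$ScriptBlock,
        [int]$FailureThreshold = 3,
        [int]$CooldownSeconds  = 60
    )
    $state = $script:CircuitState

    if ($state.OpenedAt) {
        if (((Get-Date) - $state.OpenedAt).TotalSeconds -lt $CooldownSeconds) {
            # Open circuit: fail immediately without hitting the remote service
            throw "Circuit open - skipping call until cooldown elapses"
        }
        # Cooldown elapsed: close the circuit and allow a trial call
        $state.OpenedAt = $null
        $state.Failures = 0
    }

    try {
        $result = & $ScriptBlock
        $state.Failures = 0   # A success resets the failure count
        return $result
    }
    catch {
        $state.Failures++
        if ($state.Failures -ge $FailureThreshold) {
            $state.OpenedAt = Get-Date   # Trip the breaker
            Write-Warning "Circuit opened after $($state.Failures) consecutive failures"
        }
        throw
    }
}
```

Combining this with Invoke-WithRetry gives you retries for transient blips plus fast failure during extended outages.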
Handling Pipeline Errors Appropriately
Pipeline operations in PowerShell present unique error handling challenges because errors can occur at any stage of the pipeline, and by default, they don't terminate the entire pipeline. When processing collections of objects through multiple cmdlets, you need strategies that handle individual item failures without losing the entire batch, while still maintaining visibility into what succeeded and what failed.
The ForEach-Object cmdlet with try-catch blocks inside provides granular control over pipeline error handling. This approach allows you to implement per-item error handling, collect failed items for retry or reporting, and continue processing remaining items even when some fail. Alternatively, the -ErrorVariable parameter captures errors without stopping execution, enabling post-pipeline analysis of what went wrong.
```powershell
# Handling errors for individual pipeline items
$results = Get-ChildItem -Path "C:\DataFiles\*.csv" | ForEach-Object {
    $file = $_  # Capture the current file; inside catch, $_ becomes the ErrorRecord
    try {
        $data = Import-Csv -Path $file.FullName -ErrorAction Stop
        $processed = $data | ForEach-Object {
            Process-Record -Record $_ -ErrorAction Stop
        }
        [PSCustomObject]@{
            File             = $file.Name
            Status           = "Success"
            RecordsProcessed = @($processed).Count
            Error            = $null
        }
    }
    catch {
        Write-Warning "Failed to process $($file.Name): $($_.Exception.Message)"
        [PSCustomObject]@{
            File             = $file.Name
            Status           = "Failed"
            RecordsProcessed = 0
            Error            = $_.Exception.Message
        }
    }
}

# Report on results
$successCount = @($results | Where-Object Status -eq "Success").Count
$failCount = @($results | Where-Object Status -eq "Failed").Count
Write-Host "Processing complete: $successCount succeeded, $failCount failed"

# Export failed items for review
$results | Where-Object Status -eq "Failed" |
    Export-Csv -Path "C:\Logs\FailedFiles.csv" -NoTypeInformation
```
Using ErrorVariable for Non-Terminating Error Collection
The ErrorVariable common parameter provides an elegant solution for collecting non-terminating errors without converting them to terminating errors. By specifying a variable name (without the dollar sign), PowerShell populates that variable with any errors that occur during cmdlet execution. This technique works particularly well for operations where you want to attempt all items but need to know which ones failed.
```powershell
# Collect errors without stopping execution
# (Get-Service -ComputerName requires Windows PowerShell 5.1; PowerShell 7 removed it)
$servers = @("Server01", "Server02", "Server03", "NonExistent")
$services = $servers | ForEach-Object {
    Get-Service -ComputerName $_ -Name "Spooler" -ErrorAction SilentlyContinue -ErrorVariable +serviceErrors
}

# Check if any errors occurred
if ($serviceErrors) {
    Write-Warning "Failed to query $($serviceErrors.Count) servers:"
    foreach ($err in $serviceErrors) {
        Write-Warning "  $($err.TargetObject): $($err.Exception.Message)"
    }
}

# Process successful results
$services | Where-Object { $_ } | ForEach-Object {
    Write-Host "$($_.MachineName): $($_.Status)"
}
```
Validating Input and Preventing Errors Proactively
The most elegant error handling is the error that never occurs. Input validation transforms potential runtime errors into clear, early failures with helpful messages about what's wrong and how to fix it. PowerShell's parameter validation attributes provide declarative validation that executes before your function code runs, catching invalid input at the boundary rather than deep within processing logic.
Validation attributes like ValidateNotNullOrEmpty, ValidateSet, ValidateRange, ValidatePattern, and ValidateScript enable you to specify requirements directly in parameter declarations. When validation fails, PowerShell automatically generates informative error messages that include the parameter name and validation requirement, making it immediately clear what needs to be corrected.
```powershell
function Process-DataFile {
    [CmdletBinding()]
    param(
        [Parameter(Mandatory)]
        [ValidateNotNullOrEmpty()]
        [ValidateScript({
            if (-not (Test-Path $_)) {
                throw "File '$_' does not exist"
            }
            if ((Get-Item $_).Extension -ne '.csv') {
                throw "File must be a CSV file"
            }
            return $true
        })]
        [string]$InputFile,

        [Parameter(Mandatory)]
        [ValidateSet('Production', 'Staging', 'Development')]
        [string]$Environment,

        [ValidateRange(1, 100)]
        [int]$BatchSize = 10,

        [ValidatePattern('^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}$')]
        [string]$NotificationEmail
    )

    # Additional runtime validation
    $fileSize = (Get-Item $InputFile).Length
    if ($fileSize -gt 100MB) {
        throw "Input file exceeds maximum size of 100MB (actual: $([math]::Round($fileSize/1MB, 2))MB)"
    }

    # Validate environment-specific prerequisites
    switch ($Environment) {
        'Production' {
            if (-not (Test-Path "\\ProdServer\Share")) {
                throw "Production share is not accessible"
            }
        }
        'Staging' {
            if (-not (Test-Connection -ComputerName "StagingDB" -Count 1 -Quiet)) {
                throw "Staging database server is unreachable"
            }
        }
    }

    # Processing logic here...
}
```
"An ounce of validation is worth a pound of error handling—catch problems at the boundary before they cascade through your system."
Implementing Defensive Programming Practices
Defensive programming assumes that things will go wrong and structures code to handle unexpected conditions gracefully. This includes checking return values before using them, validating assumptions about data structure, and implementing null-checking patterns. Rather than assuming a cmdlet succeeded because it didn't throw an error, explicitly verify that the expected result was produced.
```powershell
function Get-UserData {
    param([string]$Username)

    # Defensive null checking
    if ([string]::IsNullOrWhiteSpace($Username)) {
        throw "Username cannot be null or empty"
    }

    # Attempt to get the user. Note: Get-ADUser -Identity throws a terminating
    # error for a missing user even with SilentlyContinue, so query with -Filter,
    # which simply returns $null when there is no match.
    $user = Get-ADUser -Filter "SamAccountName -eq '$Username'" -ErrorAction SilentlyContinue

    # Verify we got a result
    if (-not $user) {
        throw "User '$Username' not found in Active Directory"
    }

    # Verify expected properties exist
    if (-not $user.EmailAddress) {
        Write-Warning "User '$Username' has no email address configured"
    }

    return $user
}

# Safe property access with null checking
function Get-ManagerEmail {
    param($User)

    if (-not $User) {
        Write-Warning "No user provided"
        return $null
    }
    if (-not $User.Manager) {
        Write-Verbose "User has no manager assigned"
        return $null
    }

    try {
        $manager = Get-ADUser -Identity $User.Manager -Properties EmailAddress -ErrorAction Stop
    }
    catch {
        Write-Verbose "Manager object '$($User.Manager)' could not be resolved"
        return $null
    }
    # Null-conditional member access (PowerShell 7.1+); the braces are required
    # because '?' is a legal character in variable names
    return ${manager}?.EmailAddress
}
```
Testing Error Handling Logic
Error handling code that's never tested is error handling code that doesn't work. Pester, PowerShell's testing framework, enables you to verify that your error handling behaves correctly by deliberately triggering error conditions and asserting that your code responds appropriately. This practice ensures that when real errors occur in production, your carefully crafted error handling actually executes as intended.
Testing error scenarios requires using mocks to simulate failures without depending on actual error conditions. You can mock cmdlets to throw specific exceptions, return null values, or produce invalid data, then verify that your function catches these errors, logs them appropriately, and returns expected error indicators. This approach validates both the happy path and various failure scenarios.
```powershell
# Pester v4 syntax shown (Pester v5 replaces Assert-MockCalled with Should -Invoke)
Describe "Process-DataFile Error Handling" {

    Context "When input file does not exist" {
        It "Should throw a meaningful error" {
            { Process-DataFile -InputFile "C:\NonExistent.csv" -Environment "Development" } |
                Should -Throw "*does not exist*"
        }
    }

    Context "When file is not a CSV" {
        It "Should reject non-CSV files" {
            New-Item -Path "TestDrive:\test.txt" -ItemType File
            { Process-DataFile -InputFile "TestDrive:\test.txt" -Environment "Development" } |
                Should -Throw "*must be a CSV file*"
        }
    }

    Context "When processing fails" {
        BeforeEach {
            New-Item -Path "TestDrive:\valid.csv" -ItemType File -Value "Header1,Header2`nValue1,Value2"
            # Mock the processing cmdlet to fail
            Mock Process-Record { throw "Processing error" }
        }

        It "Should log the error" {
            Mock Write-ErrorLog { }
            try {
                Process-DataFile -InputFile "TestDrive:\valid.csv" -Environment "Development"
            }
            catch { }
            Assert-MockCalled Write-ErrorLog -Exactly 1
        }

        It "Should return failed status" {
            $result = Process-DataFile -InputFile "TestDrive:\valid.csv" -Environment "Development"
            $result.Status | Should -Be "Failed"
        }
    }

    Context "When network operation times out" {
        It "Should retry the operation" {
            Mock Invoke-RestMethod { throw [System.TimeoutException]::new("Request timed out") }
            { Invoke-WithRetry -ScriptBlock { Invoke-RestMethod -Uri "http://example.com" } } |
                Should -Throw
            Assert-MockCalled Invoke-RestMethod -Exactly 4 # Initial attempt + 3 retries
        }
    }
}
```
Advanced Error Handling Patterns
Complex automation scenarios demand sophisticated error handling patterns that go beyond basic try-catch blocks. The trap statement provides function-level error handling that catches errors anywhere within a function scope, useful for establishing consistent error handling across large functions without wrapping every operation in try-catch. Trap handlers can choose to continue execution or terminate, giving you control over error recovery at a broader scope.
```powershell
function Process-LargeDataset {
    [CmdletBinding()]
    param([string]$DataPath)

    # Function-level error handler. Note that trap only fires on terminating
    # errors, hence -ErrorAction Stop on the cmdlet calls below.
    trap {
        Write-Error "Critical error in Process-LargeDataset: $_"
        Write-ErrorLog -Message "Dataset processing failed" -ErrorRecord $_

        # Cleanup resources
        if ($connection) { $connection.Close() }

        # Continue or terminate
        continue # Use 'break' to terminate the function
    }

    # Multiple operations that might fail
    $data = Import-Csv -Path $DataPath -ErrorAction Stop
    $connection = Connect-Database -Server "DataServer"
    $results = Process-Records -Records $data -Connection $connection
    Export-Results -Results $results
    $connection.Close()
}
```
Implementing Error Aggregation for Batch Operations
When processing large batches of items, collecting all errors for summary reporting provides better operational visibility than logging each error individually. This pattern accumulates errors during processing, then generates a comprehensive report showing success rates, common failure patterns, and detailed information about each failure. This approach proves invaluable for scheduled jobs that process hundreds or thousands of items.
"Batch operations require batch thinking—collect errors, analyze patterns, and provide actionable summaries rather than overwhelming operators with individual failure messages."
```powershell
function Process-UserBatch {
    param([string[]]$Usernames)

    $results = @{
        Successful = @()
        Failed     = @()
        Warnings   = @()
    }

    foreach ($username in $Usernames) {
        try {
            $user = Get-ADUser -Identity $username -ErrorAction Stop

            # Check for potential issues
            if (-not $user.EmailAddress) {
                $results.Warnings += [PSCustomObject]@{
                    Username = $username
                    Issue    = "No email address configured"
                }
            }

            # Perform operations
            Update-UserAttributes -User $user -ErrorAction Stop
            $results.Successful += $username
        }
        catch {
            $results.Failed += [PSCustomObject]@{
                Username  = $username
                Error     = $_.Exception.Message
                ErrorType = $_.Exception.GetType().Name
                Timestamp = Get-Date
            }
        }
    }

    # Generate summary report
    $summary = @"
Batch Processing Summary
========================
Total Users: $($Usernames.Count)
Successful: $($results.Successful.Count)
Failed: $($results.Failed.Count)
Warnings: $($results.Warnings.Count)
Success Rate: $([math]::Round(($results.Successful.Count / $Usernames.Count) * 100, 2))%
"@
    Write-Host $summary

    # Export detailed failure information
    if ($results.Failed.Count -gt 0) {
        $results.Failed | Export-Csv -Path "C:\Logs\BatchFailures_$(Get-Date -Format 'yyyyMMdd_HHmmss').csv" -NoTypeInformation

        # Analyze error patterns
        $errorGroups = $results.Failed | Group-Object -Property ErrorType
        Write-Host "`nError Breakdown:"
        foreach ($group in $errorGroups) {
            Write-Host "  $($group.Name): $($group.Count) occurrences"
        }
    }

    return $results
}
```
Error Handling in Scheduled and Unattended Scripts
Scripts running as scheduled tasks or background jobs require self-sufficient error handling because there's no user present to respond to prompts or read console output. These scripts must log comprehensively, send notifications for critical failures, and implement self-healing mechanisms where possible. The absence of interactivity demands that error handling be both robust and autonomous.
Notification strategies for unattended scripts should be severity-aware: critical failures warrant immediate alerts via email or messaging systems, while minor issues can accumulate into daily summary reports. Implementing notification throttling prevents alert fatigue when a single underlying issue causes multiple errors. Additionally, unattended scripts should implement timeout mechanisms to prevent hanging indefinitely when operations stall.
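The notification throttling mentioned above can be sketched with a small helper that remembers when each alert was last sent; the function name, key scheme, and mail settings are illustrative (the SMTP details mirror those used elsewhere in this article):

```powershell
# Simple notification throttle: suppress repeat alerts for the same error key
# within a time window, so a single root cause doesn't flood the inbox.
$script:LastAlertSent = @{}

function Send-ThrottledAlert {
    param(
        [Parameter(Mandatory)][string]$ErrorKey,   # e.g. the exception type name
        [Parameter(Mandatory)][string]$Message,
        [int]$ThrottleMinutes = 30
    )
    $now  = Get-Date
    $last = $script:LastAlertSent[$ErrorKey]

    if ($last -and ($now - $last).TotalMinutes -lt $ThrottleMinutes) {
        # An alert for this key went out recently; log instead of mailing again
        Write-Verbose "Alert for '$ErrorKey' suppressed (within $ThrottleMinutes-minute window)"
        return
    }

    $script:LastAlertSent[$ErrorKey] = $now
    Send-MailMessage -To "ops-team@company.com" -From "automation@company.com" `
        -Subject "Automation alert: $ErrorKey" -Body $Message -SmtpServer "smtp.company.com"
}
```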
```powershell
function Invoke-ScheduledProcessing {
    [CmdletBinding()]
    param(
        [string]$ConfigPath = "C:\Scripts\Config.json",
        [string]$LogPath = "C:\Logs\ScheduledJob.log"
    )

    # Initialize logging
    $startTime = Get-Date
    $logEntry = @{
        Timestamp = $startTime
        Status    = "Started"
        Errors    = @()
        Warnings  = @()
    }

    try {
        # Validate prerequisites
        if (-not (Test-Path $ConfigPath)) {
            throw "Configuration file not found: $ConfigPath"
        }
        $config = Get-Content $ConfigPath | ConvertFrom-Json

        # Run the work in a background job so the whole operation can be bounded
        $job = Start-Job -ScriptBlock {
            param($cfg)
            # Processing logic here
            Process-Data -Config $cfg
        } -ArgumentList $config

        # Wait with timeout
        $completed = Wait-Job -Job $job -Timeout 3600 # 1 hour timeout
        if (-not $completed) {
            Stop-Job -Job $job
            throw "Processing exceeded timeout of 1 hour"
        }

        $result = Receive-Job -Job $job
        Remove-Job -Job $job

        $logEntry.Status = "Completed"
        $logEntry.Result = $result
    }
    catch {
        $logEntry.Status = "Failed"
        $logEntry.Errors += $_.Exception.Message

        # Send critical alert (the finally block below writes the log entry)
        $emailParams = @{
            To         = "ops-team@company.com"
            From       = "automation@company.com"
            Subject    = "CRITICAL: Scheduled Job Failed - $(Get-Date -Format 'yyyy-MM-dd HH:mm')"
            Body       = @"
The scheduled processing job has failed.
Error: $($_.Exception.Message)
Time: $(Get-Date)
Server: $env:COMPUTERNAME
Script: $PSCommandPath
Please investigate immediately.
"@
            SmtpServer = "smtp.company.com"
        }
        try {
            Send-MailMessage @emailParams -ErrorAction Stop
        }
        catch {
            # If email fails, write to the event log as a last resort
            Write-EventLog -LogName Application -Source "PowerShellAutomation" `
                -EventId 1002 -EntryType Error `
                -Message "Failed to send alert email: $($_.Exception.Message)"
        }

        # Exit with an error code for Task Scheduler (finally still runs first)
        exit 1
    }
    finally {
        $duration = (Get-Date) - $startTime
        $logEntry.Duration = $duration.TotalSeconds

        # Write the final log entry
        Add-Content -Path $LogPath -Value ($logEntry | ConvertTo-Json)
        Write-Verbose "Processing completed in $($duration.TotalSeconds) seconds with status: $($logEntry.Status)"
    }
}
```
Implementing Health Checks and Self-Healing
Proactive error handling includes health checks that verify prerequisites before beginning processing, potentially avoiding errors entirely. Self-healing mechanisms can automatically remediate common issues: restarting stopped services, clearing temporary files when disk space is low, or reconnecting dropped network connections. These patterns transform reactive error handling into proactive system maintenance.
function Test-Prerequisites {
    [CmdletBinding()]
    param([hashtable]$Requirements)

    $issues = @()

    # Check disk space
    if ($Requirements.MinimumDiskSpaceGB) {
        $drive = Get-PSDrive -Name C
        $freeSpaceGB = [math]::Round($drive.Free / 1GB, 2)
        if ($freeSpaceGB -lt $Requirements.MinimumDiskSpaceGB) {
            $issues += "Insufficient disk space: ${freeSpaceGB}GB available, $($Requirements.MinimumDiskSpaceGB)GB required"
            # Attempt self-healing: clear temp files
            try {
                Remove-Item -Path "$env:TEMP\*" -Recurse -Force -ErrorAction SilentlyContinue
                Write-Verbose "Cleared temporary files to free disk space"
            }
            catch {
                Write-Warning "Could not clear temporary files: $($_.Exception.Message)"
            }
        }
    }

    # Check required services
    if ($Requirements.RequiredServices) {
        foreach ($serviceName in $Requirements.RequiredServices) {
            $service = Get-Service -Name $serviceName -ErrorAction SilentlyContinue
            if (-not $service) {
                $issues += "Required service not found: $serviceName"
            }
            elseif ($service.Status -ne 'Running') {
                $issues += "Required service not running: $serviceName"
                # Attempt self-healing: start the service
                try {
                    Start-Service -Name $serviceName -ErrorAction Stop
                    Write-Verbose "Started service: $serviceName"
                    Start-Sleep -Seconds 5
                }
                catch {
                    Write-Warning "Could not start service ${serviceName}: $($_.Exception.Message)"
                }
            }
        }
    }

    # Check network connectivity
    # Note: $hostName is used because $host is a reserved automatic variable
    if ($Requirements.RequiredHosts) {
        foreach ($hostName in $Requirements.RequiredHosts) {
            if (-not (Test-Connection -ComputerName $hostName -Count 1 -Quiet)) {
                $issues += "Cannot reach required host: $hostName"
            }
        }
    }

    if ($issues.Count -gt 0) {
        throw "Prerequisites not met:`n" + ($issues -join "`n")
    }

    Write-Verbose "All prerequisites verified successfully"
}
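A call to the function above might look like the following; the thresholds, service names, and host names are illustrative and should be adjusted for your environment:

```powershell
# Illustrative requirements matching the keys Test-Prerequisites checks
$requirements = @{
    MinimumDiskSpaceGB = 10
    RequiredServices   = @('Spooler', 'WinRM')
    RequiredHosts      = @('fileserver01', 'sqlserver01')
}

try {
    Test-Prerequisites -Requirements $requirements -Verbose
    # Prerequisites met (possibly after self-healing) -- safe to continue
}
catch {
    Write-Error "Aborting run: $($_.Exception.Message)"
    exit 1
}
```

Because the function throws a single terminating error listing every unmet prerequisite, the caller gets a complete picture in one failure rather than fixing issues one at a time.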
Documentation and Error Handling Standards
Establishing consistent error handling patterns across your PowerShell codebase improves maintainability and reduces cognitive load when troubleshooting. Documentation should specify which errors are expected and handled versus which indicate genuine problems, what retry logic applies to different operation types, and what logging conventions are followed. This standardization enables team members to quickly understand error handling behavior without reading every line of code.
Comment-based help for functions should document error conditions, including what exceptions might be thrown and under what circumstances. This information helps consumers of your functions implement appropriate error handling in their own code. Additionally, maintaining a centralized error handling module with reusable functions promotes consistency and reduces code duplication.
<#
.SYNOPSIS
    Processes customer data files with comprehensive error handling.

.DESCRIPTION
    This function imports customer data from CSV files, validates the data,
    and performs processing operations. It implements retry logic for transient
    failures and comprehensive error logging.

.PARAMETER InputPath
    Path to the input CSV file. Must exist and be readable.

.PARAMETER OutputPath
    Path where processed results will be written. Directory must exist.

.EXAMPLE
    Process-CustomerData -InputPath "C:\Data\customers.csv" -OutputPath "C:\Results"

.NOTES
    Error Handling:
    - File I/O errors: Logged and thrown (terminating)
    - Validation errors: Collected and reported in summary (non-terminating)
    - Network errors: Retried up to 3 times with exponential backoff

    Logging:
    - All errors written to: C:\Logs\CustomerProcessing.log
    - Critical errors also written to Windows Event Log

    Prerequisites:
    - Minimum 1GB free disk space
    - Network connectivity to processing service
    - Required PowerShell modules: ImportExcel, SqlServer

.LINK
    https://internal.wiki/powershell-error-handling-standards
#>
function Process-CustomerData {
    [CmdletBinding()]
    param(
        [Parameter(Mandatory)]
        [ValidateScript({ Test-Path $_ })]
        [string]$InputPath,

        [Parameter(Mandatory)]
        [ValidateScript({ Test-Path (Split-Path $_) })]
        [string]$OutputPath
    )

    # Implementation with documented error handling...
}
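As a sketch of what a centralized error handling module might contain, the hypothetical `Invoke-WithErrorHandling` wrapper below applies one logging-and-rethrow policy to any operation. The function name and message format are assumptions for illustration, not a standard API:

```powershell
function Invoke-WithErrorHandling {
    [CmdletBinding()]
    param(
        [Parameter(Mandatory)]
        [scriptblock]$Operation,

        [string]$OperationName = 'Unnamed operation'
    )

    try {
        & $Operation
    }
    catch {
        # One consistent message format for every script that imports the module
        Write-Warning "[$OperationName] $($_.Exception.GetType().Name): $($_.Exception.Message)"
        throw  # rethrow so the caller decides whether to continue
    }
}

# Callers supply only the operation; the handling policy lives in one place
Invoke-WithErrorHandling -OperationName 'Import customers' -Operation {
    Import-Csv -Path 'C:\Data\customers.csv' -ErrorAction Stop
}
```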
What is the difference between terminating and non-terminating errors in PowerShell?
Terminating errors immediately stop script execution and transfer control to error handling blocks like catch statements, while non-terminating errors display an error message but allow the script to continue processing subsequent commands. Terminating errors can be caught with try-catch blocks, whereas non-terminating errors require the ErrorVariable parameter or conversion to terminating errors using ErrorAction Stop to be caught programmatically.
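A minimal demonstration of the difference:

```powershell
# Non-terminating: the error is reported, but execution continues
Get-Item -Path 'C:\DoesNotExist.txt'
Write-Host "This line still runs"

# Terminating (promoted via -ErrorAction Stop): the catch block fires
try {
    Get-Item -Path 'C:\DoesNotExist.txt' -ErrorAction Stop
    Write-Host "This line is never reached"
}
catch {
    Write-Host "Caught: $($_.Exception.Message)"
}
```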
How do I make non-terminating errors catchable with try-catch blocks?
Add the -ErrorAction Stop parameter to the cmdlet that might produce non-terminating errors, or set $ErrorActionPreference = "Stop" at the script level. This converts non-terminating errors into terminating errors that will trigger catch blocks. You can also use the -ErrorVariable parameter to collect non-terminating errors in a variable for inspection after execution.
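The three techniques side by side; note that an explicit `-ErrorAction` on a command overrides the script-level preference:

```powershell
# 1. Per-command promotion to a terminating error
try {
    Get-Service -Name 'NoSuchService' -ErrorAction Stop
}
catch {
    Write-Warning "Caught: $($_.Exception.Message)"
}

# 2. Script-wide promotion (affects every subsequent command)
$ErrorActionPreference = 'Stop'

# 3. Collect non-terminating errors for later inspection
Get-ChildItem -Path 'C:\Missing1', 'C:\Missing2' -ErrorAction SilentlyContinue -ErrorVariable dirErrors
if ($dirErrors) {
    Write-Warning "Collected $($dirErrors.Count) error(s) during listing"
}
```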
When should I implement retry logic in error handling?
Implement retry logic for transient failures that are likely to resolve themselves, such as network timeouts, temporary service unavailability, or resource locking. Do not retry for permanent failures like authentication errors, missing resources, or invalid input data. Use exponential backoff between retries to avoid overwhelming recovering services, and implement maximum retry counts to prevent infinite loops.
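A sketch of this pattern, catching only an exception type that typically indicates a transient network failure (in Windows PowerShell 5.1 web cmdlet failures surface as `WebException`; newer versions throw `HttpRequestException`, so adjust the catch clause accordingly):

```powershell
function Invoke-WithRetry {
    param(
        [Parameter(Mandatory)]
        [scriptblock]$Action,

        [int]$MaxAttempts = 3,
        [int]$BaseDelaySeconds = 2
    )

    for ($attempt = 1; $attempt -le $MaxAttempts; $attempt++) {
        try {
            return & $Action
        }
        catch [System.Net.WebException] {
            # Transient network failure: retry with exponential backoff
            if ($attempt -eq $MaxAttempts) { throw }
            $delay = $BaseDelaySeconds * [math]::Pow(2, $attempt - 1)  # 2, 4, 8...
            Write-Warning "Attempt $attempt failed; retrying in $delay seconds"
            Start-Sleep -Seconds $delay
        }
        # Other exception types (authentication, bad input) are deliberately
        # not caught here, so permanent failures terminate immediately.
    }
}
```

Catching a specific exception type is what enforces the "retry transient, fail fast on permanent" rule: anything else propagates on the first attempt.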
What information should I include in error log messages?
Effective error logs should include the timestamp, error message, exception type, the command or operation that failed, line number and script location, relevant context like what data was being processed, and the stack trace for debugging. For production systems, also log the server name, user account under which the script is running, and any relevant configuration values that might affect behavior.
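Most of these fields are available directly on the `ErrorRecord` object passed to a catch block; a sketch of a logging helper (the log path is illustrative):

```powershell
function Write-ErrorLog {
    param(
        [Parameter(Mandatory)]
        [System.Management.Automation.ErrorRecord]$ErrorRecord,

        [string]$Context = ''
    )

    # Capture timestamp, exception details, location, and environment in one entry
    [pscustomobject]@{
        Timestamp  = Get-Date -Format 'o'
        Message    = $ErrorRecord.Exception.Message
        Type       = $ErrorRecord.Exception.GetType().FullName
        Command    = [string]$ErrorRecord.InvocationInfo.MyCommand
        Script     = $ErrorRecord.InvocationInfo.ScriptName
        Line       = $ErrorRecord.InvocationInfo.ScriptLineNumber
        StackTrace = $ErrorRecord.ScriptStackTrace
        Server     = $env:COMPUTERNAME
        User       = $env:USERNAME
        Context    = $Context
    } | ConvertTo-Json -Compress | Add-Content -Path 'C:\Logs\Errors.log'
}
```

Inside any catch block, `Write-ErrorLog -ErrorRecord $_ -Context 'Importing customer file'` then records a complete, machine-parseable entry.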
How do I handle errors in pipeline operations without stopping the entire pipeline?
Use ForEach-Object with try-catch blocks inside to handle errors for individual pipeline items, allowing you to implement per-item error handling while continuing to process remaining items. Alternatively, use the -ErrorAction SilentlyContinue parameter combined with -ErrorVariable to collect errors without stopping execution, then analyze the error collection after pipeline completion. This approach enables you to separate successful items from failed ones and generate summary reports.
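Per-item handling inside `ForEach-Object` looks like this; note that inside the catch block `$_` refers to the error record, so the pipeline item must be captured in a separate variable first:

```powershell
$failed = [System.Collections.Generic.List[object]]::new()

Get-ChildItem -Path 'C:\Data\*.csv' | ForEach-Object {
    $file = $_  # capture the pipeline item; $_ changes meaning inside catch
    try {
        $rows = Import-Csv -Path $file.FullName -ErrorAction Stop
        Write-Verbose "Processed $($file.Name): $($rows.Count) rows"
    }
    catch {
        $failed.Add([pscustomobject]@{
            File  = $file.FullName
            Error = $_.Exception.Message
        })
    }
}

if ($failed.Count -gt 0) {
    Write-Warning "$($failed.Count) file(s) failed"
    $failed | Format-Table -AutoSize
}
```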
What are best practices for error handling in scheduled scripts?
Scheduled scripts should implement comprehensive logging to files or centralized logging systems, send notifications for critical failures via email or messaging systems, implement timeout mechanisms to prevent hanging indefinitely, include health checks before beginning processing, and exit with appropriate error codes that task schedulers can interpret. Avoid any interactive prompts or console-only output since no user is present to respond or view them.
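A minimal skeleton combining these practices; the specific non-zero exit codes are an illustrative convention you define for your scheduler, not a Task Scheduler requirement:

```powershell
$log = 'C:\Logs\NightlyJob.log'
try {
    # Health checks and main processing would run here, with -ErrorAction Stop
    Add-Content -Path $log -Value "$(Get-Date -Format 'u') Completed successfully"
    exit 0   # success
}
catch [System.IO.IOException] {
    Add-Content -Path $log -Value "$(Get-Date -Format 'u') I/O failure: $($_.Exception.Message)"
    exit 2   # likely transient: the scheduler can be configured to rerun
}
catch {
    Add-Content -Path $log -Value "$(Get-Date -Format 'u') Fatal: $($_.Exception.Message)"
    exit 1   # fatal: requires investigation
}
```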