Writing Functions and Parameters in PowerShell
[Diagram: a PowerShell function — param block with typed, default, and mandatory parameters; validation attributes; function body and return values; example invocations and help comments.]
PowerShell scripting transforms repetitive IT tasks into efficient, automated workflows, and at the heart of this transformation lies the ability to write custom functions with well-designed parameters. Whether you're managing hundreds of servers, automating deployment pipelines, or simply trying to eliminate the tedium of daily administrative tasks, understanding how to craft reusable functions is the difference between spending hours on manual work and executing complex operations with a single command. This skill separates those who merely use PowerShell from those who truly harness its power.
Functions in PowerShell are self-contained blocks of code designed to perform specific tasks, while parameters serve as the gateway through which you pass information to these functions. Together, they create flexible, maintainable scripts that can adapt to different scenarios without requiring code modifications. This article explores multiple perspectives on function creation—from basic syntax to advanced parameter validation, from simple helper scripts to enterprise-grade modules.
Throughout this comprehensive guide, you'll discover practical techniques for building robust PowerShell functions, learn how to implement various parameter types and validation methods, understand the cmdlet binding attributes that make your functions behave like native PowerShell commands, and gain insights into best practices that professional scripters use daily. You'll find detailed examples, comparative tables, and actionable strategies that you can immediately apply to your own scripting projects.
Understanding Function Fundamentals
Creating a function in PowerShell begins with the function keyword followed by a name that describes what the function does. The naming convention follows the Verb-Noun pattern, mirroring PowerShell's built-in cmdlets. This consistency makes your custom functions feel native to the PowerShell environment and helps other administrators immediately understand their purpose.
A basic function structure includes the function declaration, an optional parameter block, and the code that performs the actual work. The simplest functions require no parameters and execute the same way every time they're called. However, the real power emerges when you add parameters that allow the function to adapt its behavior based on input values.
```powershell
function Get-SystemInformation {
    $os = Get-CimInstance -ClassName Win32_OperatingSystem
    $computer = Get-CimInstance -ClassName Win32_ComputerSystem
    [PSCustomObject]@{
        ComputerName    = $computer.Name
        OperatingSystem = $os.Caption
        Version         = $os.Version
        LastBootTime    = $os.LastBootUpTime
        TotalMemoryGB   = [math]::Round($computer.TotalPhysicalMemory / 1GB, 2)
    }
}
```

This basic function retrieves system information without requiring any input. It demonstrates the fundamental structure: a descriptive name, code that performs a specific task, and output formatted as a custom object. While functional, this approach lacks flexibility because it only works on the local computer.
"The transition from writing scripts to writing functions represents a fundamental shift in how you approach PowerShell automation. Functions force you to think about reusability, modularity, and the interfaces through which your code interacts with the world."
Implementing Parameters Effectively
Parameters transform rigid functions into flexible tools. The param block defines what information your function accepts and how it should be processed. Placed at the beginning of your function body, this block declares each parameter with its data type, default values, and validation rules.
When defining parameters, you specify the data type in square brackets before the parameter name. This type constraint ensures that PowerShell validates input before your function code executes, preventing runtime errors from invalid data. Common types include [string], [int], [datetime], [array], and [hashtable], though you can use any .NET type.
```powershell
function Get-RemoteSystemInformation {
    param(
        [string]$ComputerName = $env:COMPUTERNAME,
        [PSCredential]$Credential,
        [switch]$IncludeDiskInfo
    )
    # Get-CimInstance has no -Credential parameter, so alternate credentials
    # must be supplied through a CIM session created with New-CimSession.
    $sessionParams = @{
        ComputerName = $ComputerName
    }
    if ($Credential) {
        $sessionParams.Credential = $Credential
    }
    $cimSession = New-CimSession @sessionParams
    try {
        $os = Get-CimInstance -ClassName Win32_OperatingSystem -CimSession $cimSession
        $computer = Get-CimInstance -ClassName Win32_ComputerSystem -CimSession $cimSession
        $result = [PSCustomObject]@{
            ComputerName    = $computer.Name
            OperatingSystem = $os.Caption
            Version         = $os.Version
            LastBootTime    = $os.LastBootUpTime
            TotalMemoryGB   = [math]::Round($computer.TotalPhysicalMemory / 1GB, 2)
        }
        if ($IncludeDiskInfo) {
            $disks = Get-CimInstance -ClassName Win32_LogicalDisk -Filter "DriveType=3" -CimSession $cimSession
            $result | Add-Member -MemberType NoteProperty -Name Disks -Value $disks
        }
        return $result
    }
    finally {
        Remove-CimSession -CimSession $cimSession
    }
}
```

This enhanced version demonstrates several parameter concepts. The ComputerName parameter has a default value, making it optional. The Credential parameter uses the PSCredential type for secure authentication; because Get-CimInstance does not accept credentials directly, the function passes them through a CIM session. The IncludeDiskInfo parameter is a switch, which means it's either present or absent; no value is needed.
Parameter Attributes and Validation
PowerShell provides attribute decorators that add sophisticated validation and behavior to parameters. These attributes appear in square brackets above the parameter declaration and control how PowerShell processes the input before your function code runs.
- [Parameter(Mandatory=$true)] - Forces the user to provide a value; PowerShell prompts if the parameter is missing
- [ValidateNotNullOrEmpty()] - Ensures the parameter receives an actual value, not null or empty strings
- [ValidateSet()] - Restricts input to a predefined list of acceptable values
- [ValidateRange()] - Limits numeric parameters to a specific range
- [ValidatePattern()] - Requires input to match a regular expression pattern
- [ValidateScript()] - Runs custom validation logic against the parameter value
```powershell
function New-UserAccount {
    param(
        [Parameter(Mandatory=$true)]
        [ValidateNotNullOrEmpty()]
        [string]$Username,

        [Parameter(Mandatory=$true)]
        [ValidatePattern('^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}$')]
        [string]$EmailAddress,

        [ValidateSet('IT', 'HR', 'Finance', 'Operations')]
        [string]$Department = 'Operations',

        [ValidateRange(1, 100)]
        [int]$MailboxSizeGB = 50,

        [ValidateScript({Test-Path $_ -PathType Container})]
        [string]$HomeDirectory
    )
    Write-Verbose "Creating user account: $Username"
    Write-Verbose "Department: $Department"
    Write-Verbose "Mailbox size: $MailboxSizeGB GB"
    # Account creation logic here
}
```

"Parameter validation isn't just about preventing errors—it's about creating a conversation between your function and its users. Good validation provides clear feedback about what went wrong and guides users toward correct usage."
Advanced Parameter Techniques
Beyond basic validation, PowerShell offers advanced parameter features that make your functions behave like professional cmdlets. The [CmdletBinding()] attribute transforms a simple function into an advanced function, unlocking features such as verbose output, -WhatIf support, and automatic parameter handling.
When you add [CmdletBinding()] above your param block, your function gains access to common parameters that users expect from native cmdlets. These include -Verbose, -Debug, -ErrorAction, -WarningAction, and others. You don't need to declare these parameters—they're automatically available.
| CmdletBinding Parameter | Purpose | Usage Example |
|---|---|---|
| SupportsShouldProcess | Enables -WhatIf and -Confirm parameters for testing changes before execution | [CmdletBinding(SupportsShouldProcess=$true)] |
| DefaultParameterSetName | Specifies which parameter set to use when ambiguity exists | [CmdletBinding(DefaultParameterSetName='ByName')] |
| PositionalBinding | Controls whether parameters can be specified by position | [CmdletBinding(PositionalBinding=$false)] |
| ConfirmImpact | Sets the risk level for operations requiring confirmation | [CmdletBinding(ConfirmImpact='High')] |
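SupportsShouldProcess from the first row is worth seeing in action. The following minimal sketch (the function name and path are illustrative, not from the article) shows how $PSCmdlet.ShouldProcess gates a destructive operation so that -WhatIf previews it safely:

```powershell
function Remove-StaleLogFile {
    [CmdletBinding(SupportsShouldProcess = $true, ConfirmImpact = 'Medium')]
    param(
        [Parameter(Mandatory = $true)]
        [string]$Path
    )
    # ShouldProcess returns $false when the caller passes -WhatIf,
    # so the destructive action is skipped and a preview message prints instead.
    if ($PSCmdlet.ShouldProcess($Path, 'Remove stale log file')) {
        Remove-Item -Path $Path
    }
}

# Preview only, nothing is deleted:
# Remove-StaleLogFile -Path 'C:\Logs\old.log' -WhatIf
```

Because the function declares SupportsShouldProcess, PowerShell adds -WhatIf and -Confirm automatically; you never declare those parameters yourself.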
Parameter Sets for Multiple Usage Patterns
Parameter sets allow a single function to accept different combinations of parameters for different scenarios. This technique prevents invalid parameter combinations while maintaining a clean, single function interface. Each parameter can belong to one or more sets, and PowerShell ensures only compatible parameters are used together.
```powershell
function Get-UserInformation {
    [CmdletBinding(DefaultParameterSetName='ByUsername')]
    param(
        [Parameter(Mandatory=$true,
                   ParameterSetName='ByUsername',
                   Position=0)]
        [string]$Username,

        [Parameter(Mandatory=$true,
                   ParameterSetName='ByEmail')]
        [string]$EmailAddress,

        [Parameter(Mandatory=$true,
                   ParameterSetName='ByEmployeeID')]
        [int]$EmployeeID,

        [Parameter(ParameterSetName='ByUsername')]
        [Parameter(ParameterSetName='ByEmail')]
        [Parameter(ParameterSetName='ByEmployeeID')]
        [switch]$IncludeGroups
    )
    switch ($PSCmdlet.ParameterSetName) {
        'ByUsername' {
            Write-Verbose "Searching by username: $Username"
            $user = Get-ADUser -Identity $Username
        }
        'ByEmail' {
            Write-Verbose "Searching by email: $EmailAddress"
            $user = Get-ADUser -Filter "EmailAddress -eq '$EmailAddress'"
        }
        'ByEmployeeID' {
            Write-Verbose "Searching by employee ID: $EmployeeID"
            $user = Get-ADUser -Filter "EmployeeID -eq '$EmployeeID'"
        }
    }
    if ($IncludeGroups) {
        $groups = Get-ADPrincipalGroupMembership -Identity $user
        $user | Add-Member -MemberType NoteProperty -Name Groups -Value $groups
    }
    return $user
}
```

This function demonstrates three distinct ways to search for a user account, each requiring different parameters. The IncludeGroups switch works with any parameter set, showing how parameters can span multiple sets. The $PSCmdlet.ParameterSetName variable tells you which set the user invoked.
Pipeline Input and ValueFromPipeline
PowerShell's pipeline is one of its most powerful features, allowing output from one command to flow directly into another. Making your functions pipeline-aware requires specific parameter attributes and processing blocks that handle input streams efficiently.
The ValueFromPipeline attribute tells PowerShell that a parameter should accept input from the pipeline. When combined with Begin, Process, and End blocks, your function can handle both single objects and collections passed through the pipeline.
```powershell
function Test-ServerConnectivity {
    [CmdletBinding()]
    param(
        [Parameter(Mandatory=$true,
                   ValueFromPipeline=$true,
                   ValueFromPipelineByPropertyName=$true)]
        [Alias('Name', 'Server', 'Computer')]
        [string[]]$ComputerName,

        [ValidateRange(1, 10)]
        [int]$Count = 2,

        [ValidateRange(100, 5000)]
        [int]$TimeoutMs = 1000
    )
    Begin {
        Write-Verbose "Starting connectivity tests"
        $results = @()
    }
    Process {
        foreach ($computer in $ComputerName) {
            Write-Verbose "Testing connection to $computer"
            # Note: -TimeoutSeconds requires PowerShell 7+; Windows PowerShell 5.1's
            # Test-Connection has no timeout parameter. Sub-second values round up to 1.
            $timeoutSeconds = [math]::Max(1, [int]($TimeoutMs / 1000))
            $pingResult = Test-Connection -ComputerName $computer -Count $Count -TimeoutSeconds $timeoutSeconds -Quiet
            $results += [PSCustomObject]@{
                ComputerName = $computer
                IsOnline     = $pingResult
                TestedAt     = Get-Date
                PingCount    = $Count
            }
        }
    }
    End {
        Write-Verbose "Completed testing $($results.Count) computers"
        return $results
    }
}
```

The Begin block executes once before processing any pipeline input, perfect for initialization. The Process block runs once for each object coming through the pipeline, making it ideal for the main work. The End block executes once after all pipeline input is processed, useful for cleanup or final output.
"Pipeline-aware functions represent the pinnacle of PowerShell design. They integrate seamlessly with other commands, process data efficiently, and feel natural to experienced PowerShell users who think in terms of data streams rather than individual operations."
Working with Dynamic Parameters
Dynamic parameters appear or disappear based on the values of other parameters or runtime conditions. This advanced technique creates intelligent functions that adapt their interface to the context, showing only relevant options and hiding unnecessary complexity.
Implementing dynamic parameters requires creating a DynamicParam block that returns a RuntimeDefinedParameterDictionary. While more complex than standard parameters, dynamic parameters provide unmatched flexibility for scenarios where the available options depend on previous choices.
```powershell
function Get-ServiceConfiguration {
    [CmdletBinding()]
    param(
        [Parameter(Mandatory=$true)]
        [ValidateSet('Windows', 'Linux')]
        [string]$OperatingSystem
    )
    DynamicParam {
        $paramDictionary = New-Object System.Management.Automation.RuntimeDefinedParameterDictionary
        if ($OperatingSystem -eq 'Windows') {
            $serviceParam = New-Object System.Management.Automation.ParameterAttribute
            $serviceParam.Mandatory = $true
            $serviceCollection = New-Object System.Collections.ObjectModel.Collection[System.Attribute]
            $serviceCollection.Add($serviceParam)
            # Validate against the services actually present on this machine
            $serviceNames = (Get-Service).Name
            $serviceValidation = New-Object System.Management.Automation.ValidateSetAttribute($serviceNames)
            $serviceCollection.Add($serviceValidation)
            $serviceRuntimeParam = New-Object System.Management.Automation.RuntimeDefinedParameter('ServiceName', [string], $serviceCollection)
            $paramDictionary.Add('ServiceName', $serviceRuntimeParam)
        }
        if ($OperatingSystem -eq 'Linux') {
            $daemonParam = New-Object System.Management.Automation.ParameterAttribute
            $daemonParam.Mandatory = $true
            $daemonCollection = New-Object System.Collections.ObjectModel.Collection[System.Attribute]
            $daemonCollection.Add($daemonParam)
            $daemonRuntimeParam = New-Object System.Management.Automation.RuntimeDefinedParameter('DaemonName', [string], $daemonCollection)
            $paramDictionary.Add('DaemonName', $daemonRuntimeParam)
        }
        return $paramDictionary
    }
    Process {
        if ($OperatingSystem -eq 'Windows') {
            $serviceName = $PSBoundParameters['ServiceName']
            Get-Service -Name $serviceName | Select-Object Name, Status, StartType, DisplayName
        }
        else {
            $daemonName = $PSBoundParameters['DaemonName']
            Write-Output "Would retrieve daemon information for: $daemonName"
        }
    }
}
```

This function shows different parameters based on the operating system selection. For Windows, it displays a ServiceName parameter validated against actual services. For Linux, it shows a DaemonName parameter. The $PSBoundParameters automatic variable provides access to dynamic parameter values.
Output and Return Strategies
How your function returns data significantly impacts its usability and integration with other PowerShell commands. PowerShell functions can output data in several ways, each with different implications for pipeline processing and result handling.
The most natural approach is to simply output objects without using the return keyword. Any object that isn't captured by a variable or redirected automatically becomes output. This method works seamlessly with the pipeline and allows your function to output multiple objects throughout its execution.
| Output Method | Best Used When | Pipeline Behavior | Considerations |
|---|---|---|---|
| Implicit Output | Streaming multiple objects or working with pipelines | Each object flows immediately to the next command | Most PowerShell-native approach; automatic and efficient |
| Return Statement | Exiting early or returning a single result | Outputs the value then exits the function | Familiar to programmers from other languages |
| Write-Output | Explicitly sending objects to the success stream | Identical to implicit output | Makes intent clear but usually unnecessary |
| Custom Objects | Returning structured data with specific properties | Creates consistent, predictable output format | Enables property-based filtering and formatting |
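The difference between the first two rows is easiest to see with a small, self-contained pair of functions (the names and values below are illustrative):

```powershell
function Get-SquareStreaming {
    param([int[]]$Numbers)
    foreach ($n in $Numbers) {
        # Implicit output: each value is emitted to the pipeline as soon as it
        # is produced, so downstream commands receive it before the loop ends.
        $n * $n
    }
}

function Get-FirstSquare {
    param([int[]]$Numbers)
    foreach ($n in $Numbers) {
        # return outputs the value and exits the function immediately,
        # so only the first square is ever produced.
        return $n * $n
    }
}

Get-SquareStreaming -Numbers 1, 2, 3   # emits 1, 4, 9
Get-FirstSquare -Numbers 1, 2, 3       # emits only 1
```

The streaming version composes naturally with the pipeline; the return version is appropriate only when a single result or an early exit is the point.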
Creating Structured Output with Custom Objects
Custom objects created with [PSCustomObject] provide the most professional output format. They define a consistent structure with named properties, making your function's output predictable and easy to work with in the pipeline.
```powershell
function Get-DirectoryStatistics {
    [CmdletBinding()]
    param(
        [Parameter(Mandatory=$true)]
        [ValidateScript({Test-Path $_ -PathType Container})]
        [string]$Path,

        [switch]$Recurse
    )
    $getChildItemParams = @{
        Path = $Path
        File = $true
    }
    if ($Recurse) {
        $getChildItemParams.Recurse = $true
    }
    $files = Get-ChildItem @getChildItemParams
    $stats = $files | Group-Object Extension | ForEach-Object {
        $totalSize = ($_.Group | Measure-Object Length -Sum).Sum
        [PSCustomObject]@{
            Extension      = if ($_.Name) { $_.Name } else { '(no extension)' }
            FileCount      = $_.Count
            TotalSizeBytes = $totalSize
            TotalSizeMB    = [math]::Round($totalSize / 1MB, 2)
            AverageSizeKB  = [math]::Round(($totalSize / $_.Count) / 1KB, 2)
            LargestFile    = ($_.Group | Sort-Object Length -Descending | Select-Object -First 1).Name
        }
    }
    $stats | Sort-Object TotalSizeBytes -Descending
}
```

This function demonstrates professional output formatting. Each result is a custom object with clearly named properties. The output can be easily filtered, sorted, formatted, or exported because it follows PowerShell conventions.
"The quality of a function's output determines its reusability. Functions that return well-structured custom objects integrate seamlessly into complex pipelines, while those that return raw text or inconsistent data become isolated tools that resist composition."
Error Handling in Functions
Robust error handling separates professional functions from amateur scripts. PowerShell provides multiple mechanisms for catching, handling, and reporting errors, each appropriate for different scenarios. Understanding when to terminate execution versus when to continue processing is crucial for creating reliable automation.
PowerShell distinguishes between terminating errors that stop execution immediately and non-terminating errors that report problems but allow the function to continue. The $ErrorActionPreference variable and -ErrorAction parameter control how your function responds to errors.
```powershell
function Copy-FileWithRetry {
    [CmdletBinding(SupportsShouldProcess=$true)]
    param(
        [Parameter(Mandatory=$true)]
        [ValidateScript({Test-Path $_ -PathType Leaf})]
        [string]$Source,

        [Parameter(Mandatory=$true)]
        [string]$Destination,

        [ValidateRange(1, 10)]
        [int]$MaxRetries = 3,

        [ValidateRange(1, 60)]
        [int]$RetryDelaySeconds = 5
    )
    $attempt = 0
    $success = $false
    while (-not $success -and $attempt -lt $MaxRetries) {
        $attempt++
        try {
            Write-Verbose "Attempt $attempt of $MaxRetries"
            if ($PSCmdlet.ShouldProcess($Destination, "Copy file from $Source")) {
                Copy-Item -Path $Source -Destination $Destination -ErrorAction Stop
                $success = $true
                Write-Verbose "Successfully copied file to $Destination"
                [PSCustomObject]@{
                    Source      = $Source
                    Destination = $Destination
                    Attempts    = $attempt
                    Success     = $true
                    Timestamp   = Get-Date
                }
            }
        }
        catch {
            Write-Warning "Attempt $attempt failed: $($_.Exception.Message)"
            if ($attempt -lt $MaxRetries) {
                Write-Verbose "Waiting $RetryDelaySeconds seconds before retry"
                Start-Sleep -Seconds $RetryDelaySeconds
            }
            else {
                Write-Error "Failed to copy file after $MaxRetries attempts"
                [PSCustomObject]@{
                    Source      = $Source
                    Destination = $Destination
                    Attempts    = $attempt
                    Success     = $false
                    Error       = $_.Exception.Message
                    Timestamp   = Get-Date
                }
            }
        }
    }
}
```

This function demonstrates comprehensive error handling with retry logic. The try-catch block captures errors, the -ErrorAction Stop parameter converts non-terminating errors to terminating ones, and the function provides detailed feedback about success or failure.
Custom Error Messages and Validation
Creating meaningful error messages helps users understand what went wrong and how to fix it. The throw statement generates terminating errors with custom messages, while Write-Error creates non-terminating errors that allow continued execution.
```powershell
function Invoke-DatabaseBackup {
    [CmdletBinding()]
    param(
        [Parameter(Mandatory=$true)]
        [string]$ServerInstance,

        [Parameter(Mandatory=$true)]
        [string]$Database,

        [Parameter(Mandatory=$true)]
        [ValidateScript({
            # Capture the path first: inside the catch block below, $_ would
            # refer to the error record rather than the parameter value.
            $dir = $_
            if (-not (Test-Path $dir -PathType Container)) {
                throw "Backup directory '$dir' does not exist or is not accessible"
            }
            $testFile = Join-Path $dir "writetest_$(Get-Random).tmp"
            try {
                [System.IO.File]::Create($testFile).Close()
                Remove-Item $testFile -Force
                return $true
            }
            catch {
                throw "Backup directory '$dir' is not writable: $($_.Exception.Message)"
            }
        })]
        [string]$BackupDirectory
    )
    $backupFileName = "$Database`_$(Get-Date -Format 'yyyyMMdd_HHmmss').bak"
    $backupPath = Join-Path $BackupDirectory $backupFileName
    try {
        Write-Verbose "Starting backup of database '$Database' on server '$ServerInstance'"
        $query = @"
BACKUP DATABASE [$Database]
TO DISK = N'$backupPath'
WITH NOFORMAT, NOINIT,
NAME = N'$Database-Full Database Backup',
SKIP, NOREWIND, NOUNLOAD, STATS = 10
"@
        Invoke-Sqlcmd -ServerInstance $ServerInstance -Query $query -ErrorAction Stop
        if (Test-Path $backupPath) {
            $backupFile = Get-Item $backupPath
            [PSCustomObject]@{
                Database       = $Database
                ServerInstance = $ServerInstance
                BackupFile     = $backupPath
                SizeMB         = [math]::Round($backupFile.Length / 1MB, 2)
                StartTime      = $backupFile.CreationTime
                Success        = $true
            }
        }
        else {
            throw "Backup completed but file was not found at expected location: $backupPath"
        }
    }
    catch {
        Write-Error "Database backup failed: $($_.Exception.Message)"
        [PSCustomObject]@{
            Database       = $Database
            ServerInstance = $ServerInstance
            BackupFile     = $backupPath
            Success        = $false
            Error          = $_.Exception.Message
        }
    }
}
```

"Error handling isn't about preventing failures—failures are inevitable. It's about failing gracefully, providing actionable information, and ensuring your automation can recover or alert appropriately when things go wrong."
Documentation and Help Content
Professional functions include comprehensive help content that appears when users run Get-Help against your function. Comment-based help uses special keywords in comments to define help sections that PowerShell automatically formats and displays.
Help content should include a synopsis, detailed description, parameter descriptions, usage examples, and related links. This documentation transforms your function from a personal tool into a shareable resource that others can confidently use.
```powershell
function Set-FilePermission {
    <#
    .SYNOPSIS
    Modifies NTFS permissions on files and folders.

    .DESCRIPTION
    The Set-FilePermission function provides a simplified interface for modifying
    NTFS permissions on filesystem objects. It supports adding, removing, or replacing
    permissions for specified users or groups with common permission levels.

    This function requires administrative privileges to modify permissions on system
    files and folders. It supports both files and directories, with optional recursion
    for directory trees.

    .PARAMETER Path
    Specifies the path to the file or folder. Wildcards are not supported.
    The path must exist and be accessible.

    .PARAMETER Identity
    Specifies the user or group account to modify. Use the format DOMAIN\Username
    or COMPUTERNAME\Username for domain or local accounts respectively.

    .PARAMETER Permission
    Specifies the permission level to grant. Valid values are:
    - ReadOnly: Read and execute permissions
    - Modify: Read, write, and execute permissions
    - FullControl: Complete control over the object

    .PARAMETER Action
    Specifies what to do with the permission. Valid values are:
    - Add: Adds the permission while preserving existing permissions
    - Remove: Removes the specified permission
    - Replace: Removes all existing permissions and applies only the specified permission

    .PARAMETER Recurse
    When specified, applies permissions recursively to all child items in a directory.
    Only applicable when Path points to a directory.

    .EXAMPLE
    Set-FilePermission -Path "C:\Data\Reports" -Identity "DOMAIN\ReportUsers" -Permission ReadOnly -Action Add

    Adds read-only permissions for DOMAIN\ReportUsers to the Reports folder.

    .EXAMPLE
    Set-FilePermission -Path "C:\Shared\Projects" -Identity "DOMAIN\ProjectTeam" -Permission Modify -Action Replace -Recurse

    Replaces all permissions on the Projects folder and its contents with Modify permissions
    for DOMAIN\ProjectTeam.

    .EXAMPLE
    Get-ChildItem "C:\Logs" -Directory | ForEach-Object {
        Set-FilePermission -Path $_.FullName -Identity "DOMAIN\LogReaders" -Permission ReadOnly -Action Add
    }

    Adds read-only permissions for DOMAIN\LogReaders to all subdirectories in C:\Logs.

    .NOTES
    Author: IT Operations Team
    Requires: PowerShell 5.1 or later, Administrative privileges

    .LINK
    Get-Acl

    .LINK
    Set-Acl
    #>
    [CmdletBinding(SupportsShouldProcess=$true, ConfirmImpact='High')]
    param(
        [Parameter(Mandatory=$true, ValueFromPipeline=$true, ValueFromPipelineByPropertyName=$true)]
        [ValidateScript({Test-Path $_})]
        [Alias('FullName')]
        [string]$Path,

        [Parameter(Mandatory=$true)]
        [string]$Identity,

        [Parameter(Mandatory=$true)]
        [ValidateSet('ReadOnly', 'Modify', 'FullControl')]
        [string]$Permission,

        [Parameter(Mandatory=$true)]
        [ValidateSet('Add', 'Remove', 'Replace')]
        [string]$Action,

        [switch]$Recurse
    )
    Process {
        # Function implementation here
        Write-Verbose "Processing path: $Path"
        Write-Verbose "Identity: $Identity, Permission: $Permission, Action: $Action"
    }
}
```
This comprehensive help block demonstrates all major help sections. Users can now run Get-Help Set-FilePermission to see formatted documentation, including syntax, parameter descriptions, and practical examples. The help content makes the function self-documenting and significantly more accessible to other administrators.
Performance Optimization Techniques
Performance matters when functions process large datasets or run frequently in automated workflows. PowerShell offers several techniques for optimizing function execution, from choosing efficient cmdlets to minimizing pipeline overhead and leveraging .NET methods when appropriate.
The most impactful optimization is often reducing the number of pipeline iterations. Instead of piping objects through multiple ForEach-Object calls, consider using .NET methods or collecting objects once and processing them efficiently. Similarly, prefer Where-Object with optimized filtering over multiple pipeline stages.
- 🚀 Use .NET methods for simple operations - Direct .NET calls often outperform cmdlets for basic tasks like string manipulation or file operations
- 🚀 Minimize pipeline stages - Each pipeline stage adds overhead; combine operations when possible
- 🚀 Filter early in the pipeline - Reduce the dataset as early as possible to minimize processing downstream
- 🚀 Avoid repeated cmdlet calls in loops - Call cmdlets once and store results rather than repeatedly querying
- 🚀 Use ArrayList or Generic Lists for growing collections - Arrays have poor performance when repeatedly adding elements
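A quick way to see the cost named in the last bullet is to time both approaches with Measure-Command (the iteration count below is arbitrary; absolute timings vary by machine, but the gap widens as the count grows):

```powershell
$iterations = 5000

# Appending to a fixed-size array copies the entire array on every +=
$arrayTime = Measure-Command {
    $arr = @()
    for ($i = 0; $i -lt $iterations; $i++) { $arr += $i }
}

# A Generic List grows in place, so Add() is effectively constant time
$listTime = Measure-Command {
    $list = [System.Collections.Generic.List[int]]::new()
    for ($i = 0; $i -lt $iterations; $i++) { $list.Add($i) }
}

"Array +=     : $($arrayTime.TotalMilliseconds) ms"
"Generic List : $($listTime.TotalMilliseconds) ms"
```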
```powershell
function Find-LargeFiles {
    [CmdletBinding()]
    param(
        [Parameter(Mandatory=$true)]
        [string]$Path,

        [ValidateRange(1, 10000)]
        [int]$MinimumSizeMB = 100,

        [int]$TopCount = 50
    )
    Write-Verbose "Scanning directory: $Path"
    Write-Verbose "Minimum file size: $MinimumSizeMB MB"
    $minimumBytes = $MinimumSizeMB * 1MB

    # Efficient approach: single Get-ChildItem call with filtering
    $files = Get-ChildItem -Path $Path -File -Recurse -ErrorAction SilentlyContinue |
        Where-Object { $_.Length -ge $minimumBytes } |
        Sort-Object Length -Descending |
        Select-Object -First $TopCount

    # Use ArrayList for efficient collection building
    $results = [System.Collections.ArrayList]::new()
    foreach ($file in $files) {
        $null = $results.Add([PSCustomObject]@{
            FileName  = $file.Name
            Directory = $file.DirectoryName
            SizeMB    = [math]::Round($file.Length / 1MB, 2)
            SizeGB    = [math]::Round($file.Length / 1GB, 3)
            Created   = $file.CreationTime
            Modified  = $file.LastWriteTime
            Extension = $file.Extension
        })
    }
    Write-Verbose "Found $($results.Count) files matching criteria"
    return $results
}
```

This optimized function demonstrates several performance techniques. It performs filtering in a single pipeline rather than multiple passes, uses comparison operators efficiently, and employs ArrayList for collection building. The result is a function that can scan large directory trees without significant performance degradation.
Testing and Debugging Functions
Professional function development includes systematic testing and debugging. PowerShell provides built-in debugging capabilities through breakpoints, step execution, and variable inspection. Additionally, Pester—PowerShell's testing framework—enables automated unit and integration testing.
Debugging begins with Write-Verbose and Write-Debug statements strategically placed throughout your function. These statements provide runtime visibility without cluttering normal output. Users can enable verbose output with the -Verbose parameter to see detailed execution information.
```powershell
function Test-NetworkPort {
    [CmdletBinding()]
    param(
        [Parameter(Mandatory=$true)]
        [string]$ComputerName,

        [Parameter(Mandatory=$true)]
        [ValidateRange(1, 65535)]
        [int]$Port,

        [ValidateRange(100, 10000)]
        [int]$TimeoutMs = 2000
    )
    Write-Verbose "Testing connection to $ComputerName on port $Port"
    Write-Debug "Timeout set to $TimeoutMs milliseconds"
    try {
        $tcpClient = New-Object System.Net.Sockets.TcpClient
        # Time the connection attempt so ResponseTime reflects actual latency
        $stopwatch = [System.Diagnostics.Stopwatch]::StartNew()
        $connection = $tcpClient.BeginConnect($ComputerName, $Port, $null, $null)
        Write-Debug "Connection attempt initiated"
        $wait = $connection.AsyncWaitHandle.WaitOne($TimeoutMs, $false)
        if (-not $wait) {
            Write-Verbose "Connection attempt timed out"
            $tcpClient.Close()
            return [PSCustomObject]@{
                ComputerName = $ComputerName
                Port         = $Port
                IsOpen       = $false
                ResponseTime = $null
                Status       = 'Timeout'
            }
        }
        try {
            $tcpClient.EndConnect($connection)
            $stopwatch.Stop()
            Write-Verbose "Successfully connected to port $Port"
            [PSCustomObject]@{
                ComputerName = $ComputerName
                Port         = $Port
                IsOpen       = $true
                ResponseTime = $stopwatch.ElapsedMilliseconds
                Status       = 'Open'
            }
        }
        catch {
            Write-Verbose "Port $Port is closed or filtered"
            [PSCustomObject]@{
                ComputerName = $ComputerName
                Port         = $Port
                IsOpen       = $false
                ResponseTime = $null
                Status       = 'Closed'
            }
        }
        finally {
            $tcpClient.Close()
            Write-Debug "TCP client connection closed"
        }
    }
    catch {
        Write-Error "Error testing port: $($_.Exception.Message)"
        [PSCustomObject]@{
            ComputerName = $ComputerName
            Port         = $Port
            IsOpen       = $false
            ResponseTime = $null
            Status       = "Error: $($_.Exception.Message)"
        }
    }
}
```

This function includes comprehensive verbose and debug output. During development, you can run it with -Verbose to see execution flow or -Debug to pause at debug statements and inspect variables. This layered debugging approach helps identify issues without modifying the function's core logic.
Best Practices and Professional Standards
Professional PowerShell functions follow established conventions that make them predictable, maintainable, and compatible with the broader PowerShell ecosystem. These practices aren't arbitrary rules—they emerge from years of community experience and Microsoft's own design guidelines.
Naming conventions matter significantly. Functions should use approved PowerShell verbs (Get, Set, New, Remove, etc.) followed by a singular noun. This pattern immediately communicates what the function does and maintains consistency with native cmdlets. The Get-Verb cmdlet shows all approved verbs with their intended meanings.
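Before settling on a function name, you can check a candidate verb against the approved list directly:

```powershell
# Get-Verb returns every approved verb with its group and intended meaning.
$approvedVerbs = (Get-Verb).Verb
$approvedVerbs -contains 'Get'       # True: approved
$approvedVerbs -contains 'Retrieve'  # False: use Get instead
```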
Essential Function Development Standards
Naming and Structure: Always use approved verbs and singular nouns. Avoid abbreviations except for well-known acronyms. Keep function names descriptive but concise. Use PascalCase for function names and parameters.
Parameter Design: Make common scenarios easy with sensible defaults. Use parameter validation to prevent invalid input rather than checking manually in code. Provide meaningful parameter names that clearly indicate their purpose. Group related parameters into parameter sets when functions support multiple usage patterns.
Output Consistency: Return the same object type regardless of success or failure conditions. Use custom objects with consistent property names. Avoid mixing output types or returning strings when objects are more appropriate. Enable pipeline processing for functions that operate on collections.
Error Handling: Use terminating errors for conditions that prevent function completion. Use non-terminating errors for issues that don't prevent processing remaining items. Provide clear, actionable error messages. Include relevant details in error records for troubleshooting.
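To sketch that distinction, the hypothetical function below emits a non-terminating error for each missing file so the remaining items are still processed; a terminating error (throw) would be reserved for conditions that make any further work pointless:

```powershell
function Remove-TempFile {
    [CmdletBinding()]
    param(
        [Parameter(Mandatory=$true)]
        [string[]]$Path
    )

    foreach ($item in $Path) {
        if (-not (Test-Path $item)) {
            # Non-terminating: report the problem but continue with remaining items
            Write-Error "File not found: $item"
            continue
        }
        Remove-Item -Path $item
    }
}
```

Callers who want the stricter behavior can still opt in with -ErrorAction Stop, which promotes the non-terminating errors to terminating ones.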
Documentation: Include comment-based help for every exported function. Provide at least three examples showing common usage patterns. Document any prerequisites, dependencies, or required permissions. Explain what the function returns and in what format.
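Comment-based help lives in a specially formatted comment block inside the function; Get-Help then renders it like native cmdlet help. The function below is an illustrative sketch (the name and logic are hypothetical):

```powershell
function Get-FolderSize {
    <#
    .SYNOPSIS
        Returns the total size of a folder in bytes.
    .DESCRIPTION
        Recursively sums the sizes of all files beneath the given path.
    .PARAMETER Path
        The folder to measure. Defaults to the current directory.
    .EXAMPLE
        Get-FolderSize -Path C:\Temp
    .OUTPUTS
        System.Int64
    #>
    [CmdletBinding()]
    param([string]$Path = '.')

    (Get-ChildItem -Path $Path -Recurse -File -ErrorAction SilentlyContinue |
        Measure-Object -Property Length -Sum).Sum
}
```

Once the function is loaded, `Get-Help Get-FolderSize -Full` shows the synopsis, parameter descriptions, and examples exactly as it would for a built-in cmdlet.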
Module Organization and Distribution
As your function library grows, organizing functions into modules becomes essential. Modules provide namespacing, version management, and distribution mechanisms. A well-structured module includes a manifest file (.psd1) that defines metadata, version information, and exported functions.
# File: MyITTools.psm1

# Private helper function (not exported)
function Test-AdminPrivilege {
    $currentPrincipal = New-Object Security.Principal.WindowsPrincipal([Security.Principal.WindowsIdentity]::GetCurrent())
    return $currentPrincipal.IsInRole([Security.Principal.WindowsBuiltInRole]::Administrator)
}

# Public function (will be exported)
function Get-SystemReport {
    [CmdletBinding()]
    param(
        [string]$ComputerName = $env:COMPUTERNAME,
        [switch]$IncludeServices
    )

    if (-not (Test-AdminPrivilege)) {
        Write-Warning "This function works best when run with administrative privileges"
    }

    # Implementation here
}

# Public function (will be exported)
function Set-SystemConfiguration {
    [CmdletBinding(SupportsShouldProcess=$true)]
    param(
        [Parameter(Mandatory=$true)]
        [hashtable]$Configuration
    )

    if (-not (Test-AdminPrivilege)) {
        throw "This function requires administrative privileges"
    }

    # Implementation here
}

# Export only public functions
Export-ModuleMember -Function Get-SystemReport, Set-SystemConfiguration

The module structure separates public functions (exported) from private helper functions (internal only). This encapsulation protects implementation details while exposing a clean public interface. The Export-ModuleMember cmdlet explicitly controls what users can access.
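The manifest mentioned above is typically generated once with New-ModuleManifest and then edited by hand. A minimal sketch follows; the file name matches the module above, but every value shown is illustrative, and the GUID is a placeholder (New-ModuleManifest generates a real one for you):

```powershell
# File: MyITTools.psd1 -- normally created with New-ModuleManifest, then edited
@{
    RootModule        = 'MyITTools.psm1'
    ModuleVersion     = '1.0.0'
    GUID              = '00000000-0000-0000-0000-000000000000'  # placeholder
    Author            = 'Your Name'
    Description       = 'Illustrative IT administration helpers'
    FunctionsToExport = @('Get-SystemReport', 'Set-SystemConfiguration')
}
```

Listing functions explicitly in FunctionsToExport (rather than using a wildcard) speeds up module auto-discovery and keeps the public surface deliberate.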
How do I make my function accept pipeline input?
Add ValueFromPipeline=$true to the parameter's [Parameter()] attribute and implement Begin, Process, and End blocks. The Process block executes once for each pipeline object, while Begin runs once before any input arrives and End runs once after all input is processed. This pattern enables efficient streaming of data through your function.
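A minimal sketch of that pattern (the function name is hypothetical):

```powershell
function Get-StringLength {
    [CmdletBinding()]
    param(
        [Parameter(Mandatory=$true, ValueFromPipeline=$true)]
        [string]$InputString
    )

    begin {
        # Runs once, before any pipeline input
        $count = 0
    }
    process {
        # Runs once per pipeline object
        $count++
        [PSCustomObject]@{
            Value  = $InputString
            Length = $InputString.Length
        }
    }
    end {
        # Runs once, after all input has been processed
        Write-Verbose "Processed $count items"
    }
}

'one', 'three' | Get-StringLength
```

Because Process emits each result as it is produced, downstream cmdlets start receiving objects immediately instead of waiting for the whole collection.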
What's the difference between [CmdletBinding()] and advanced functions?
Adding [CmdletBinding()] above your param block transforms a simple function into an advanced function. This unlocks automatic common parameters like -Verbose, -Debug, -ErrorAction, and -WarningAction. It also enables features like ShouldProcess for -WhatIf support and parameter validation attributes. Without [CmdletBinding()], these features aren't available.
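One of the most useful unlocks is -WhatIf support. In this hypothetical sketch, SupportsShouldProcess plus a ShouldProcess check lets callers preview a destructive action without performing it:

```powershell
function Remove-OldLog {
    [CmdletBinding(SupportsShouldProcess=$true)]
    param(
        [Parameter(Mandatory=$true)]
        [string]$Path
    )

    # ShouldProcess returns $false when -WhatIf is used, so nothing is deleted
    if ($PSCmdlet.ShouldProcess($Path, 'Delete log file')) {
        Remove-Item -Path $Path
    }
}

# Remove-OldLog -Path C:\Logs\old.log -WhatIf   # previews the deletion only
```

The same mechanism also honors -Confirm, prompting before the action runs.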
When should I use parameter sets versus separate functions?
Use parameter sets when different parameter combinations represent alternative ways to accomplish the same task. For example, searching for a user by username, email, or employee ID. Use separate functions when the operations are fundamentally different, even if they're related. Parameter sets keep related functionality together while preventing invalid parameter combinations.
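The user-lookup scenario can be sketched as follows (the function and its search logic are hypothetical; real code would query a directory service):

```powershell
function Get-UserRecord {
    [CmdletBinding(DefaultParameterSetName='ByUserName')]
    param(
        [Parameter(Mandatory=$true, ParameterSetName='ByUserName')]
        [string]$UserName,

        [Parameter(Mandatory=$true, ParameterSetName='ByEmail')]
        [string]$Email,

        [Parameter(Mandatory=$true, ParameterSetName='ByEmployeeId')]
        [int]$EmployeeId
    )

    # $PSCmdlet.ParameterSetName tells you which set the caller used
    switch ($PSCmdlet.ParameterSetName) {
        'ByUserName'   { "Searching by user name: $UserName" }
        'ByEmail'      { "Searching by email: $Email" }
        'ByEmployeeId' { "Searching by employee ID: $EmployeeId" }
    }
}
```

PowerShell itself rejects invalid combinations: calling the function with both -UserName and -Email fails at parameter binding, before any of your code runs.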
How can I validate that a file path parameter exists?
Use the [ValidateScript()] attribute with Test-Path: [ValidateScript({Test-Path $_ -PathType Leaf})] for files or [ValidateScript({Test-Path $_ -PathType Container})] for directories. The script block receives the parameter value as $_ and must return $true for validation to pass. You can throw custom error messages within the script block to provide helpful feedback when validation fails.
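Putting both pieces together, this hypothetical function validates that the path is an existing file and throws a custom message when it is not:

```powershell
function Import-ConfigFile {
    [CmdletBinding()]
    param(
        [Parameter(Mandatory=$true)]
        [ValidateScript({
            # $_ is the value being bound; return $true to pass, throw to fail
            if (Test-Path $_ -PathType Leaf) { $true }
            else { throw "File not found: $_" }
        })]
        [string]$Path
    )

    Get-Content -Path $Path
}
```

The custom throw message reaches the caller as part of the binding error, which is far more helpful than the generic "validation script failed" text.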
Should I use Write-Output or just output objects directly?
In most cases, simply output objects without Write-Output. PowerShell automatically sends any unassigned objects to the output stream. Write-Output is functionally identical but adds unnecessary verbosity. Use it only when you want to explicitly document that you're sending something to output, or when you need to distinguish output from other streams in complex scenarios.
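A small hypothetical example of the implicit output stream:

```powershell
function Get-Square {
    param([int]$Max = 3)

    foreach ($n in 1..$Max) {
        # An unassigned expression goes straight to the output stream;
        # wrapping it in Write-Output would behave identically
        $n * $n
    }
}

Get-Square   # outputs 1, 4, 9
```

Note the flip side: any cmdlet call inside the function that produces output you did not intend to return should be assigned, piped to Out-Null, or cast to [void], or it will leak into your function's output.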
How do I handle credentials securely in function parameters?
Use the [PSCredential] type for credential parameters. This ensures credentials are stored securely and prevents accidental exposure in logs or console output. Users can pass credentials using Get-Credential or by creating PSCredential objects. Never accept passwords as plain strings—always use the PSCredential type which protects the password as a SecureString.
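A hypothetical sketch of a credential-taking function (the remote call is shown only as a comment, since it needs real infrastructure):

```powershell
function Connect-RemoteServer {
    [CmdletBinding()]
    param(
        [Parameter(Mandatory=$true)]
        [string]$ComputerName,

        [Parameter(Mandatory=$true)]
        [System.Management.Automation.PSCredential]$Credential
    )

    # The password remains a SecureString inside the PSCredential object;
    # only the user name is safe to echo in logs or verbose output
    Write-Verbose "Connecting to $ComputerName as $($Credential.UserName)"

    # Real implementation would pass $Credential on, for example:
    # Invoke-Command -ComputerName $ComputerName -Credential $Credential -ScriptBlock { hostname }
}
```

Interactively, callers simply run `Connect-RemoteServer -ComputerName SRV01 -Credential (Get-Credential)`; the prompt handles the secure password entry for them.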