My challenge is that I’m used to developing scripts locally with an account that has privileged access. Development and debugging are fast, because the account has immediate access. With Azure Automation, I have to wait for a cloud job to complete, and sometimes that takes a minute or two. That’s too long a feedback loop for iterating on a script.
I would rather develop locally using a privileged account and then push to Azure Automation when I’m confident that my script logic is executing as expected.
I think I found a way around the issue. In my script logic I can test the PowerShell profile path. In Azure Automation, the profile path references ‘ContainerUser’; when running locally, it references my local home directory. If the profile path references ContainerUser, I use the user-assigned managed identity; otherwise, I use my interactive credentials, e.g. a PowerShell session I previously established locally with Exchange Online.
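A minimal sketch of that branching logic, assuming the Az.Accounts module is available and with the managed identity client ID left as a placeholder:

```powershell
# Branch on the profile path to pick an authentication method.
if ($PROFILE -match 'ContainerUser') {
    # Profile path references ContainerUser: assume the Azure Automation sandbox
    # and authenticate with the user-assigned managed identity (placeholder ID).
    Connect-AzAccount -Identity -AccountId '<client-id-of-user-managed-identity>'
}
else {
    # Profile path references a local directory: assume a local session
    # and fall back to interactive sign-in.
    Connect-AzAccount
}
```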
Makes sense. I found an environment variable that detects whether the process is running in Azure Automation; the variable is only defined when running there:
```powershell
Get-ChildItem -Path env:AZUREPS_HOST_ENVIRONMENT
```
This helped me provide some conditional control on when to use the managed identity and when to use my interactive credentials.
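That conditional control might look like the following sketch, assuming the ExchangeOnlineManagement module (v3+) is installed and the Automation account has a managed identity; the organization and UPN values are placeholders:

```powershell
# Branch on AZUREPS_HOST_ENVIRONMENT instead of the profile path.
if ($env:AZUREPS_HOST_ENVIRONMENT) {
    # Variable is defined: running in Azure Automation, so authenticate
    # to Exchange Online with the managed identity.
    Connect-ExchangeOnline -ManagedIdentity -Organization 'contoso.onmicrosoft.com'
}
else {
    # Variable is absent: running locally, so use interactive credentials.
    Connect-ExchangeOnline -UserPrincipalName 'admin@contoso.com'
}
```

This keeps a single script that runs unchanged in both environments, which is what makes the local-first development loop workable.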
Meanwhile, I’m finding that the Azure Automation extension for VS Code is only useful for publishing code in runbooks; it doesn’t provide an easy way to manage custom modules. And with the amount of code I’m writing, it won’t be efficient to keep everything in runbook files. So I’m now heading down the path of using a pipeline to publish my custom module to Azure Automation, then calling that module from a lightweight runbook.
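The publish step a pipeline might run could be sketched like this, assuming the module is zipped and staged in blob storage; the account, resource group, module, and storage names are all placeholders:

```powershell
# Publish (or update) a custom module in the Automation account from a
# zip file hosted at a reachable URI.
New-AzAutomationModule `
    -AutomationAccountName 'MyAutomationAccount' `
    -ResourceGroupName 'MyResourceGroup' `
    -Name 'MyCustomModule' `
    -ContentLinkUri 'https://mystorage.blob.core.windows.net/modules/MyCustomModule.zip'
```

The runbook itself then stays small: import the module and call its entry-point function, keeping all the real logic versioned and tested in the module.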
Appreciate the guidance!