• List all VM Instances within an OCI tenant using PowerShell πŸ’»

    I’ve been working with a customer whose automation tool of choice is PowerShell… the good news for them was that OCI provides PowerShell modules!

    This means that customers can in theory do everything in PowerShell that they can with the OCI CLI.

    The following guide steps through the process of setting up the PowerShell modules for OCI – OCI Modules for PowerShell

    Once I’d got this set up (which wasn’t very painful), one of the first things I helped them automate was producing a list of all of the VM instances running within their tenancy. I’ve included the code for this below:

    # List every active compartment beneath the root compartment
    # (update the CompartmentId value to the OCID of your own root compartment/tenancy)
    $Compartments = Get-OCIIdentityCompartmentsList -CompartmentId ocid1.tenancy.oc1.. -CompartmentIdInSubtree $true -LifecycleState Active

    Foreach ($Compartment in $Compartments)
    {
        Write-Host "Compartment Name:" $Compartment.Name -ForegroundColor Green

        # List the VM instances within this compartment
        $Instances = Get-OCIComputeInstancesList -CompartmentId $Compartment.Id
        Foreach ($Instance in $Instances)
        {
            Write-Host "-Instance:" $Instance.DisplayName -ForegroundColor White
        }
    }
    

    This loops through every Compartment from the root Compartment downwards and lists the VM instances within each of these Compartments.

    The only thing that needs to be updated prior to running this script is the OCID of the root compartment (CompartmentId parameter).

    Here is the output from my tenancy:

    You can see all of the Compartments within the tenancy and the 3 x VM instances that I have:

    • N8N
    • Streamlit
    • Hub-VM
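
    As an aside, you don’t have to use PowerShell for this particular task – the OCI CLI’s Resource Search can pull a similar tenancy-wide list in a single command. A quick sketch, assuming the CLI is installed and configured:

    # Resource Search queries across all compartments in the tenancy
    oci search resource structured-search --query-text "query instance resources"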

  • Installing a desktop environment on a Linux VM hosted in OCI and making this available using RDP πŸ–₯️

    Next up in random things Brendan has done… installing a desktop environment (Gnome) on a Linux instance (Ubuntu) hosted in OCI and making it available via Remote Desktop Protocol (RDP) with xrdp – it sounds quite complicated, but there isn’t that much involved in getting it up and running βœ….

    Basically, I wanted a VM that I can RDP to from anywhere… and, importantly, from any computer, to do some basic coding (as in, my coding is all basic πŸ˜€) using Visual Studio Code and Python.

    To keep the costs down (I’m a tight Yorkshireman after all) I’m using an Always Free Ampere A1 VM instance running in OCI – so this will not cost me a penny to run πŸ™Œ.

    To learn more about the OCI Always Free resources, check this article out.

    To get started, I created a Linux instance using Ubuntu 24.04:

    I placed this into a Public Subnet within a Virtual Cloud Network – to learn more about how to do this, check this guide out. The reason for placing the VM into a Public Subnet is so that it gets a public IP address and I can connect to it directly over the Internet, without requiring a VPN or FastConnect to be in place.

    Once the VM had been provisioned, I SSH’d onto the VM instance (if you are not sure how to do this, check this guide out) and then ran the following commands in order:

    Update and Upgrade Installed Packages

    sudo apt update && sudo apt upgrade -y
    
    

    Install Ubuntu Desktop

    sudo apt install ubuntu-desktop -y
    

    Install xrdp

    sudo apt install xrdp -y
    

    Ensure that Gnome (the Ubuntu desktop environment) runs when logging in via RDP

    echo "gnome-session" > ~/.xsession
    

    Restart xrdp

    sudo systemctl restart xrdp
    

    Permit inbound traffic on TCP port 3389 (the port used by RDP)

    sudo iptables -I INPUT 4 -m state --state NEW -p tcp --dport 3389 -j ACCEPT
    sudo netfilter-persistent save
    
    

    Set a password for the user β€œubuntu”. By default, OCI configures the VM instance to authenticate the ubuntu user using SSH keys; for RDP you’ll need a password – you may prefer to use a separate non-root account for this (see the example after the command below).

    sudo passwd ubuntu
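
    If you’d rather not expose the ubuntu account over RDP, here’s a minimal sketch of creating a dedicated account instead – the username rdpuser is just an example:

    sudo adduser rdpuser            # prompts you to set a password interactively
    sudo usermod -aG sudo rdpuser   # optional – only if the account needs sudo rights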
    

    Once those commands have been run, the final thing you’ll need to do is ensure that any Security Lists OR Network Security Groups (NSGs) that the VM instance is associated with permit inbound access to port 3389 – the port used by RDP.

    More info on this (including how to do this) can be found here.

    Here is how my Security List looks (there isn’t an NSG associated with my VM instance).

    WARNING: This gives any machine on the Internet (source CIDR 0.0.0.0/0) access to this VM instance… and to any other resources in the subnet via RDP – port 3389! You’d likely want to restrict this to specific IP addresses or IP address ranges, e.g. the public IP address your house/office breaks out from, to prevent any randomer on the Internet from getting access.
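
    If you’d rather script the Security List change than click through the console, something along these lines should work with the OCI CLI – a sketch only: the OCID and the 203.0.113.10/32 source are placeholders, and the update replaces the entire ingress rule list, so ingress-rules.json needs to contain every ingress rule you want to keep, not just the RDP one.

    # ingress-rules.json – RDP locked down to a single source IP (placeholder values):
    # [ { "protocol": "6", "source": "203.0.113.10/32",
    #     "tcpOptions": { "destinationPortRange": { "min": 3389, "max": 3389 } } } ]
    oci network security-list update \
      --security-list-id <security-list-ocid> \
      --ingress-security-rules file://ingress-rules.json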

    Once the Security List had been updated, I fired up the Microsoft RDP client (other RDP clients are available!) and configured it to connect to the public IP address of the VM instance and voilà – I now have access to the desktop on my Ubuntu VM instance from anywhere.
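
    If the RDP client can’t connect, a quick first check from your local machine is whether TCP port 3389 is actually reachable – replace the placeholder with the public IP address of your instance:

    nc -vz <public-ip-address> 3389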

  • OCI Gen AI Agents: From Zero to Hero πŸ¦Έ

    I’ve just (literally) delivered a session at the Oracle User Group UK 2025 Conference and have posted a recording of the session.

    This demo-heavy session provides a high-level overview of the OCI Generative AI Agents service and walks through the process of creating an agent and configuring it to address common use-cases. The session was based on real-world customer experience rather than theoretical capabilities – which always helps to bring things to life!

  • Deploying the OCI Landing Zone into the UK Sovereign Cloud (OC4) πŸ‡¬πŸ‡§

    If you are unsure what the OCI UK Sovereign Cloud is, please check this out.

    This week I’ve been helping a customer to deploy an OCI Landing Zone (the One Operating Entity variant) to their tenancy using Terraform. We ran into a couple of issues that I wanted to document here, to hopefully help others.

    The issues are caused by two of the Terraform input configuration files having hardcoded references to the OCI Commercial Cloud (OC1) rather than the UK Sovereign Cloud (OC4); these need to be updated for the configuration to apply correctly – otherwise the terraform apply command will fail.

    Issue 1 ❌ – oci_open_lz_one-oe_iam.auto.tfvars.json has references to the services highlighted in the screenshot below:

    To resolve this, replace Fssoc1Prod with Fssoc4Prod and objectstorage-eu-frankfurt-1 with objectstorage-uk-gov-london-1.
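
    If you’d rather script the edit than do it by hand, a sed one-liner along these lines does the trick (GNU sed shown – on macOS use sed -i ''):

    sed -i -e 's/Fssoc1Prod/Fssoc4Prod/g' \
           -e 's/objectstorage-eu-frankfurt-1/objectstorage-uk-gov-london-1/g' \
           oci_open_lz_one-oe_iam.auto.tfvars.json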

    The file should then look like this:

    Issue 2 ❌ – oci_open_lz_one-oe_security_cisl1.auto.tfvars.json has 40 references to Security Policies using their actual OCIDs from OC1 – Commercial (see examples below):

    The easiest way to fix this is to do a find and replace of all instances of .oc1.., replacing them with .oc4..
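
    Again, sed makes light work of this – a sketch (GNU sed; take a copy of the file first if you want an easy way back):

    sed -i 's/\.oc1\.\./.oc4../g' oci_open_lz_one-oe_security_cisl1.auto.tfvars.json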

    Which should then look something like this:

    Note ❗️ – If you are using the CIS2 version of this configuration file instead of the CIS1 version (which I used), you will also need to make these changes there.

    That’s it!

  • Deploying an OCI Landing Zone using Terraform πŸ›©οΈ

    OCI has a number of Terraform based Landing Zone blueprints available.

    The One OE (Operating Entity) OCI LZ blueprint can be deployed to an OCI tenancy directly from GitHub using the “Deploy to OCI” button:

    This then uses OCI Resource Manager to deploy the blueprint to a tenancy – which uses Terraform under the hood.

    I wanted to deploy the One OE blueprint to one of my test tenancies; however, I wanted to do this natively using Terraform from my local machine rather than via OCI Resource Manager, mainly due to the additional flexibility and ease of troubleshooting that this approach provides.

    It took me a while to figure out exactly how to do this (with a lot of help from one of the OCI LZ Black Belts πŸ₯‹).

    I’ve documented the process that I followed below – hopefully it saves somebody else some time ⌚️.

    βœ… Step 0 – Make sure you have Terraform and Git installed on your local machine.

    βœ… Step 1 – Create a directory to store the blueprints and configuration

    I created a folder aptly named β€œOCI One OE Landing Zone”…

    …then opened a terminal and ran the following commands from within this folder:

    git clone https://github.com/oci-landing-zones/oci-landing-zone-operating-entities.git
    git clone https://github.com/oci-landing-zones/terraform-oci-modules-orchestrator.git

    These commands download the OCI OE Landing Zone blueprints and the Landing Zone Orchestrator.

    Once the downloads have completed, the folder should look something like this:

    βœ… Step 2 – Configure Authentication

    Grab a copy of the file oci-credentials.tfvars.json.template, which is located within the folder OCI One OE Landing Zone/oci-landing-zone-operating-entities/commons/content.

    Take a copy of this file, place it in the root of the OCI One OE Landing Zone folder that you just created and rename the file to oci-credentials.tfvars.json
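
    From the root of the OCI One OE Landing Zone folder, that’s just a copy with a new name:

    cp oci-landing-zone-operating-entities/commons/content/oci-credentials.tfvars.json.template ./oci-credentials.tfvars.json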

    Open the oci-credentials.tfvars.json file and populate it with your authentication information; if you don’t have this, please follow the guide here to create an API Signing Key and obtain the other required information.
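
    As a rough guide, the populated file ends up looking something like the sketch below – keep whatever field names your copy of the template defines and just drop in your own values (everything shown here is a placeholder):

    {
      "tenancy_ocid": "ocid1.tenancy.oc1..<unique-id>",
      "user_ocid": "ocid1.user.oc1..<unique-id>",
      "fingerprint": "xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx",
      "private_key_path": "~/.oci/oci_api_key.pem",
      "region": "uk-london-1"
    }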

    Here’s an example of what mine looks like:

    βœ… Step 3 – Grab a copy of the required configuration files

    In order to deploy the One OE Landing Zone, a number of configuration files are required; these can be found within the following folder:

    ‘OCI One OE Landing Zone/oci-landing-zone-operating-entities/blueprints/one-oe/runtime/one-stack’

    • oci_open_lz_one-oe_governance.auto.tfvars.json
    • oci_open_lz_one-oe_iam.auto.tfvars.json
    • oci_open_lz_one-oe_security_cisl1.auto.tfvars.json
    • oci_open_lz_hub_a_network_light.auto.tfvars.json
    • oci_open_lz_one-oe_observability_cisl1.auto.tfvars.json

    Copy these files into the root of the OCI One OE Landing Zone folder – you could leave them in their original location, but taking a copy means that you can edit them (if needed) and easily return them to their β€œvanilla” state by re-copying them from the original location.
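
    For example, from the root of the OCI One OE Landing Zone folder (if the wildcard picks up any variants you don’t need, just delete the extras):

    cp oci-landing-zone-operating-entities/blueprints/one-oe/runtime/one-stack/oci_open_lz_*.auto.tfvars.json .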

    βœ… Step 4 – Time to deploy πŸš€

    Run the following command from within the OCI One OE Landing Zone/terraform-oci-modules-orchestrator folder to download the required Terraform Providers and Modules

    terraform init

    Once this has completed, run terraform plan (from the same folder), referencing the required configuration files:

    terraform plan \
    -var-file ../oci-credentials.tfvars.json \
    -var-file ../oci_open_lz_one-oe_governance.auto.tfvars.json \
    -var-file ../oci_open_lz_one-oe_iam.auto.tfvars.json \
    -var-file ../oci_open_lz_hub_a_network_light.auto.tfvars.json \
    -var-file ../oci_open_lz_one-oe_security_cisl1.auto.tfvars.json \
    -var-file ../oci_open_lz_one-oe_observability_cisl1.auto.tfvars.json 

    ….if all goes well, you can run terraform apply (from the same folder) using the exact same configuration files.

    terraform apply \
    -var-file ../oci-credentials.tfvars.json \
    -var-file ../oci_open_lz_one-oe_governance.auto.tfvars.json \
    -var-file ../oci_open_lz_one-oe_iam.auto.tfvars.json \
    -var-file ../oci_open_lz_hub_a_network_light.auto.tfvars.json \
    -var-file ../oci_open_lz_one-oe_security_cisl1.auto.tfvars.json \
    -var-file ../oci_open_lz_one-oe_observability_cisl1.auto.tfvars.json 

    Within a few minutes, you should (hopefully!) have a beautiful OCI Landing Zone deployed within your tenancy.

  • Why isn’t DHCP working on the secondary VNIC of an OCI VM instance? ❌

    Every day is a school day – especially with OCI!

    I was recently playing around in my lab and needed to add a secondary VNIC to one of my VMs for some testing that I was doing.

    I quickly set about adding a secondary VNIC and used the default option of assigning an IP address automatically using DHCP rather than specifying a static IP address (I’m lazy, I know!).

    I gave the server a reboot, logged in, and to my surprise the shiny new secondary VNIC had acquired a nasty APIPA address (169.x.x.x) rather than the dynamic IP address that OCI had assigned (10.0.1.69) ❌:

    What is an APIPA address, you may ask?

    β€œAn APIPA (Automatic Private IP Addressing) IP address is a self-assigned address in the 169.254.x.x range that a device uses when it cannot get an IP address from a DHCP server. This feature allows devices on a local network to communicate with each other even when the DHCP server is down, providing basic connectivity”

    I deleted and re-added the VNIC, and rebooted the server more times than I care to admit – but still nothing; I couldn’t get rid of this pesky APIPA IP address and get the β€œreal” IP address that OCI had assigned (10.0.1.69).

    After realising I’d sunk far too much time into this, I reached out to a colleague who is an OCI networking whizz, who informed me that OCI will only use DHCP for the primary VNIC on VM instances – any secondary VNICs that you add to a VM instance must be configured with a static IP address (why oh why didn’t I ask them sooner 😫).

    This is quite confusing as the OCI console allows you to add a secondary VNIC and specify DHCP – it just doesn’t work 🀦‍♂️.

    It will even display the β€œdynamic” IP address that has been assigned to the instance in the console – it just won’t be picked up by the underlying OS on the VM instance as DHCP doesn’t work:

    Moral of the story, when adding a secondary VNIC (or tertiary for that matter) use static IP addressing βœ….

    Note that whilst this affected a Windows Server in my case, it applies to Linux too.
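
    On Linux, a minimal sketch of assigning the OCI-allocated address by hand looks something like the below – the interface name (ens5) and the 10.0.1.69/24 address are just examples from my lab, and you’d want to make this persistent (e.g. via netplan on Ubuntu) rather than relying on one-off ip commands:

    # Assign the address that OCI shows for the secondary VNIC in the console (example values)
    sudo ip addr add 10.0.1.69/24 dev ens5
    sudo ip link set dev ens5 up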

    Hopefully my pain will help somebody else in the future!

  • Terraform Destroy πŸ—‘οΈ, 409 Conflict Error when deleting an OCI Subnet πŸ›œ

    I was playing around with Terraform in my lab the other day and attempted to run a destroy operation to tear down everything I’d built – this was to avoid any unnecessary charges for resources that I’m not actively using in my test tenancy πŸ’·.

    The destroy operation kept failing with a 409 Conflict error, which stated that the subnet it was trying to delete had references to a VNIC. This made no sense at all, as everything had been provisioned with Terraform… and the VM instances attached to the subnet had been deleted earlier in the destroy operation 😀.

    I eventually figured out what was actually blocking the deletion… it was a VNIC attached to the subnet; however, it wasn’t a VNIC that I (or Terraform!) had created.
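
    If you hit the same 409 and want to see what’s still hanging on to the subnet, listing its private IPs with the OCI CLI shows the OCIDs of any attached VNICs – a quick way to identify the culprit:

    oci network private-ip list --subnet-id <subnet-ocid>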

    As part of the testing I’d done post-build, I had attached the Cloud Shell to a Virtual Cloud Network & Subnet – this enabled me to SSH into a VM instance that didn’t have a public IP address assigned (as I’ve previously written about here).

    The fix for this was simple: I just needed to close the Cloud Shell session (which detaches the VNIC from the subnet) and retry the destroy operation – which worked this time βœ….

  • Fun and games with OCI DRGs, RPCs and VPNs – attempting to connect from On-Prem to a peered tenant πŸ”Œ

    This is probably my most niche-ist post ever – however if it helps at least one person then it was worth writing up!

    I have an OCI Tenant (Tenant A) with a Site-to-Site VPN connection configured between my On-Premises network (my house 🏠) and Tenant A. This enables me to connect to resources within my tenant using their private IP addresses rather than via a Bastion/Jump Server – for example, I can SSH directly into VM instances.

    This has worked perfectly well for the last couple of years. Recently I provisioned a second OCI Tenant (Tenant B) and wanted to configure connectivity between Tenant A and Tenant B. After some research, I selected the option of connecting the networks in the two tenants using a Remote Peering Connection (RPC) between the Dynamic Routing Gateways (DRGs) in each tenancy.

    There are two other options to achieve this; however, as I like a challenge, I picked the most difficult of the three – and also because the customer I’m working with will likely choose the RPC option too.

    To set this up, I used the step-by-step guide available here, which is excellent – I found it far better than the official documentation.

    Once I had this all set up, I had the following architecture:

    Based on my initial testing I could now do the following:

    • Connect from my On-Premises network to resources in Tenant A βœ…
    • Connect from Tenant A to resources in Tenant B βœ…
    • Connect from Tenant B to resources in Tenant A βœ…

    I couldn’t, however, connect from On-Premises to resources in Tenant B ❌.

    In the real-world (outside of my lab), it would be essential (in most cases) to have the ability to connect from On-Premises to all OCI tenancies – in particular when they are connected like this.

    After much head-scratching and reading documentation (which is always a last resort!), I figured out the problem(s) and managed to resolve the issue of my On-Premises network being unable to connect to Tenant B.

    This was resolved by doing the following in Tenant A (no changes were required for Tenant B).

    • Created 3 x Import Route Distributions (On-Prem/RPC/VCN).
    • Created 3 x Route Tables (On-Prem/RPC/VCN), associating each of these new Route Tables with the respective new Import Route Distributions.
    • Associated each Route Table with the respective Attachments (replacing the OOTB configuration).
      • On-Prem > IPSec Tunnel Attachment
      • RPC > Remote Peering Connection Attachment
      • VCN > VCN Attachments

    Here are the Import Route Distributions that I needed to create:

    On-Prem Import Routes: This enables On-Prem to see all of the routes from the VCNs and the Remote Peering Connection.

    Remote Peering Connection Import Routes: This enables the RPC to see all of the routes from the VCNs and the IPSec tunnel (which is the Site-to-Site VPN).

    VCN Import Routes: This enables all of the VCNs to see the routes from the VCNs, the RPC and the IPSec Tunnel.

    Here are the Route Tables with the mapping to the Import Route Distributions (On-Prem/RPC/VCN):

    Here are the Attachments with the association to the respective Route Tables.

    As a side note, if you are using a FastConnect rather than a Site-to-Site VPN for On-Premises to OCI connectivity, the tweaks you’ll need to make to the configuration are:

    • Replace IPSec Tunnel with Virtual Circuit in the Import Rules
    • The On-Prem Route Table should be associated with the Virtual Circuit Attachment rather than IPSec Tunnel Attachment.

  • OCI Generative AI Agent returns a β€œNotAuthorizedOrNotFound” error when invoking a SQL tool ❌

    If you run into the following error when using an OCI Generative AI Agent that attempts to use a SQL Tool:

    User Error: Failed to execute DB query with Error – NotAuthorizedOrNotFound: Authorization failed or requested resource not found with http status 404

    If you are like me, the reason for this error is that you didn’t read the manual 🀦. The error is typically returned because the Generative AI Agents service does not have permission to access the Database Connection and Vault – which it needs in order to connect to the database and run the query generated by the agent.

    The fix for this is to create a policy that grants the necessary permissions to the Generative AI Agents service – this is documented here (and included below for reference too).

    Allow any-user to use database-tools-connections in compartment <compartment-name> where request.principal.type='genaiagent'
    
    Allow any-user to read database-tools-family in compartment <compartment-name> where request.principal.type='genaiagent'
    
    Allow any-user to read secret-family in compartment <compartment-name> where request.principal.type='genaiagent'

  • Using model aliases in OCI Gen AI 🧠

    One thing I’ve been caught out with in the past with OCI Gen AI is when an AI model gets retired and my apps that specifically call the model start to fail as the model is no longer available!

    The fix for this isn’t particularly difficult – it’s just a case of updating the code to point to the new model name (via model_id) – but it can be quite stressful when you are about to deliver a demo to a customer 😫.

    I was really pleased to see the introduction of model aliases (Cohere-only at this time), so rather than using a hardcoded reference to a specific model version you can now use the following aliases, which will always point to the latest versions of the Cohere Command R and Cohere Command R+ models.

    cohere.command-latest points to cohere.command-r-08-2024
    cohere.command-plus-latest points to cohere.command-r-plus-08-2024

    Full details are included in the documentation πŸ“–.