This repository provides infrastructure-as-code examples to automate the creation of virtual machine images and their guest operating systems on VMware vSphere using HashiCorp Packer and the Packer Plugin for VMware vSphere (vsphere-iso). All examples are authored in the HashiCorp Configuration Language ("HCL2").
Use of this project is mentioned in the VMware Validated Solution: Private Cloud Automation for VMware Cloud Foundation authored by the maintainer. Learn more about this solution at vmware.com/go/vvs.
By default, the machine image artifacts are transferred to a vSphere Content Library as an OVF template and the temporary machine image is destroyed. If an item of the same name already exists in the target content library, Packer updates the existing item with the new version of the OVF template.
The following builds are available:
- VMware Photon OS 4
- Ubuntu Server 22.04 LTS
- Ubuntu Server 20.04 LTS
- Ubuntu Server 18.04 LTS
- Red Hat Enterprise Linux 8 Server
- Red Hat Enterprise Linux 7 Server
- AlmaLinux OS 8
- Rocky Linux 8
- CentOS Stream 8
- CentOS Linux 8
- CentOS Linux 7
- Microsoft Windows Server 2022 - Standard and Datacenter
- Microsoft Windows Server 2019 - Standard and Datacenter
- Microsoft Windows Server 2016 - Standard and Datacenter
- Microsoft Windows 11
- Microsoft Windows 10
Note
Guest customization is not currently supported for AlmaLinux OS and Rocky Linux in vCenter Server 7.0 Update 3.
The Microsoft Windows 11 machine image uses a virtual trusted platform module (vTPM). Refer to the VMware vSphere product documentation for requirements and prerequisites.
The Microsoft Windows 11 machine image is not transferred to the content library by default. Cloning an encrypted virtual machine to a content library as an OVF template is not supported. You can adjust the common content library settings to use VM templates instead.
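For example, a minimal sketch of the common content library setting that switches the library item from an OVF template to a VM template (this variable appears in the common settings example later in this document; adjust the remaining settings to your environment):
common_content_library_ovf = false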
Operating Systems:
- VMware Photon OS 4
- Ubuntu Server 22.04 LTS and 20.04 LTS
- macOS Monterey and Big Sur (Intel)
Note
Operating systems and versions tested with the project.
Packer:
- HashiCorp Packer 1.8.2 or higher.
Installation steps by operating system:
- Photon OS
  PACKER_VERSION="1.8.2"
  OS_PACKAGES="wget unzip"
  if [[ $(uname -m) == "x86_64" ]]; then
    LINUX_ARCH="amd64"
  elif [[ $(uname -m) == "aarch64" ]]; then
    LINUX_ARCH="arm64"
  fi
  tdnf install ${OS_PACKAGES} -y
  wget -q https://releases.hashicorp.com/packer/${PACKER_VERSION}/packer_${PACKER_VERSION}_linux_${LINUX_ARCH}.zip
  unzip -o -d /usr/local/bin/ packer_${PACKER_VERSION}_linux_${LINUX_ARCH}.zip
- Ubuntu
  sudo apt-get update && sudo apt-get install -y gnupg software-properties-common curl
  curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo apt-key add -
  sudo apt-add-repository "deb [arch=amd64] https://apt.releases.hashicorp.com $(lsb_release -cs) main"
  sudo apt-get update && sudo apt-get install packer
- macOS
  brew tap hashicorp/tap
  brew install hashicorp/tap/packer
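After installation, you can confirm that the Packer binary is on your path (a simple check, not part of the project's scripts):
packer version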
- HashiCorp Packer Plugin for VMware vSphere (vsphere-iso) 1.0.5 or higher.
- Packer Plugin for Windows Updates 0.14.1 or higher - a community plugin for HashiCorp Packer.
Note
Required plugins are automatically downloaded and initialized when using ./build.sh. For dark sites, you may download the plugins and place them in the same directory as your Packer executable (e.g., /usr/local/bin) or in $HOME/.packer.d/plugins.
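For reference, each build declares its plugin requirements in a packer block similar to the following sketch; the versions shown match the minimums above, and the exact block in a build's *.pkr.hcl files may differ:
packer {
  required_version = ">= 1.8.2"
  required_plugins {
    vsphere = {
      version = ">= 1.0.5"
      source  = "github.com/hashicorp/vsphere"
    }
    windows-update = {
      version = ">= 0.14.1"
      source  = "github.com/rgl/windows-update"
    }
  }
}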
Additional Software Packages:
The following software packages must be installed on the operating system running Packer.
Installation steps by operating system:
- Git command-line tools.
  - Photon OS
    tdnf install git
  - Ubuntu
    apt-get install git
  - macOS
    brew install git
- Ansible 2.9 or higher.
  - Photon OS
    tdnf install ansible
  - Ubuntu
    apt-get install ansible
  - macOS
    brew install ansible
- A command-line .iso creator. Packer will use one of the following:
  - Photon OS
    tdnf install xorriso
  - Ubuntu
    apt-get install xorriso
  - macOS
    hdiutil (native)
- mkpasswd
  - Ubuntu
    apt-get install whois
  - macOS
    brew install --cask docker (mkpasswd is run in a container; see the example later in this document)
- Coreutils
  - macOS
    brew install coreutils
- HashiCorp Terraform 1.2.3 or higher.
  - Photon OS
    TERRAFORM_VERSION="1.2.3"
    OS_PACKAGES="wget unzip"
    if [[ $(uname -m) == "x86_64" ]]; then
      LINUX_ARCH="amd64"
    elif [[ $(uname -m) == "aarch64" ]]; then
      LINUX_ARCH="arm64"
    fi
    tdnf install ${OS_PACKAGES} -y
    wget -q https://releases.hashicorp.com/terraform/${TERRAFORM_VERSION}/terraform_${TERRAFORM_VERSION}_linux_${LINUX_ARCH}.zip
    unzip -o -d /usr/local/bin/ terraform_${TERRAFORM_VERSION}_linux_${LINUX_ARCH}.zip
  - Ubuntu
    sudo apt-get update && sudo apt-get install terraform
  - macOS
    brew install hashicorp/tap/terraform
- Gomplate 3.10.0 or higher.
  - Ubuntu
    GOMPLATE_VERSION="3.10.0"
    sudo curl -o /usr/local/bin/gomplate -sSL https://github.com/hairyhenderson/gomplate/releases/download/v${GOMPLATE_VERSION}/gomplate_linux-amd64
    sudo chmod 755 /usr/local/bin/gomplate
  - macOS
    brew install gomplate
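A quick way to confirm the supporting tools are installed and meet the minimum versions (output varies by platform):
git --version
ansible --version
terraform version
gomplate --version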
Platform:
- VMware vSphere 7.0 Update 3 or higher
Download the latest release of the project. You may also clone the main branch for the latest prerelease updates.
Example:
git clone https://github.com/vmware-samples/packer-examples-for-vsphere.git
The directory structure of the repository:
├── build.sh
├── config.sh
├── set-envvars.sh
├── LICENSE
├── NOTICE
├── README.md
├── ansible
│   ├── roles
│   │   └── <role>
│   │       ├── defaults
│   │       │   └── main.yml
│   │       ├── files
│   │       │   └── root-ca.cer.example
│   │       ├── handlers
│   │       │   └── main.yml
│   │       ├── meta
│   │       │   └── main.yml
│   │       ├── tasks
│   │       │   ├── main.yml
│   │       │   └── *.yml
│   │       └── vars
│   │           └── main.yml
│   ├── ansible.cfg
│   └── main.yml
├── builds
│   ├── ansible.pkrvars.hcl.example
│   ├── build.pkrvars.hcl.example
│   ├── common.pkrvars.hcl.example
│   ├── proxy.pkrvars.hcl.example
│   ├── rhsm.pkrvars.hcl.example
│   ├── vsphere.pkrvars.hcl.example
│   ├── linux
│   │   └── <distribution>
│   │       └── <version>
│   │           ├── *.pkr.hcl
│   │           ├── *.auto.pkrvars.hcl
│   │           └── data
│   │               └── ks.pkrtpl.hcl
│   └── windows
│       └── <distribution>
│           └── <version>
│               ├── *.pkr.hcl
│               ├── *.auto.pkrvars.hcl
│               └── data
│                   └── autounattend.pkrtpl.hcl
├── certificates
│   └── root-ca.cer.example
├── manifests
├── scripts
│   └── windows
│       └── *.ps1
└── terraform
    ├── vsphere-role
    └── vsphere-virtual-machine
The files are distributed in the following directories:
- ansible - contains the Ansible roles to prepare a Linux machine image build.
- builds - contains the templates, variables, and configuration files for the machine image build.
- scripts - contains the scripts to initialize and prepare a Windows machine image build.
- certificates - contains the Trusted Root Authority certificates for a Windows machine image build.
- manifests - contains the manifests created after the completion of a machine image build.
- terraform - contains example Terraform plans to test machine image builds.
Warning
When forking the project for upstream contribution, please be mindful not to make changes that may expose your sensitive information, such as passwords, keys, certificates, etc.
Download the x64 guest operating system .iso images.
Linux Distributions:
- VMware Photon OS 4 Server - download the 4.0 Rev2 release of the FULL .iso image (e.g., photon-4.0-xxxxxxxxx.iso).
- Ubuntu Server 22.04 LTS - download the latest LIVE release .iso image (e.g., ubuntu-22.04.x-live-server-amd64.iso).
- Ubuntu Server 20.04 LTS - download the latest LIVE release .iso image (e.g., ubuntu-20.04.x-live-server-amd64.iso).
- Ubuntu Server 18.04 LTS - download the latest legacy NON-LIVE release .iso image (e.g., ubuntu-18.04.x-server-amd64.iso).
- Red Hat Enterprise Linux 8 Server - download the latest release of the FULL .iso image (e.g., rhel-8.x-x86_64-dvd1.iso).
- Red Hat Enterprise Linux 7 Server - download the latest release of the FULL .iso image (e.g., rhel-server-7.x-x86_64-dvd1.iso).
- AlmaLinux OS 8 - download the latest release of the FULL .iso image (e.g., AlmaLinux-8.x-x86_64-dvd1.iso).
- Rocky Linux 8 - download the latest release of the FULL .iso image (e.g., Rocky-8.x-x86_64-dvd1.iso).
- CentOS Stream 8 - download the latest release of the FULL .iso image (e.g., CentOS-Stream-8-x86_64-latest-dvd1.iso).
- CentOS Linux 8 - download the latest release of the FULL .iso image (e.g., CentOS-8.x.xxxx-x86_64-dvd1.iso).
- CentOS Linux 7 - download the latest release of the FULL .iso image (e.g., CentOS-7-x86_64-DVD.iso).
Microsoft Windows:
- Microsoft Windows Server 2022
- Microsoft Windows Server 2019
- Microsoft Windows Server 2016
- Microsoft Windows 11
- Microsoft Windows 10
Obtain the checksum type (e.g., sha256, md5, etc.) and checksum value for each guest operating system .iso image from the vendor. These will be used in the build input variables.
Upload your guest operating system .iso images to the ISO datastore and paths that will be used in your variables.
Example: config/common.pkrvars.hcl
common_iso_datastore = "sfo-w01-cl01-ds-nfs01"
Example: builds/<type>/<build>/*.auto.pkrvars.hcl
iso_path           = "iso/linux/photon"
iso_file           = "photon-4.0-xxxxxxxxx.iso"
iso_checksum_type  = "md5"
iso_checksum_value = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
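If you want to double-check a downloaded image against the vendor-published value before uploading it, compute the checksum locally with the algorithm that matches your chosen checksum type (file name taken from the example above; on macOS, use shasum -a 256 in place of sha256sum):
sha256sum photon-4.0-xxxxxxxxx.iso
md5sum photon-4.0-xxxxxxxxx.iso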
Create a custom vSphere role with the required privileges to integrate HashiCorp Packer with VMware vSphere. A service account can be added to the role to ensure that Packer has least privilege access to the infrastructure. Clone the default Read-Only vSphere role and add the following privileges:
Category | Privilege | Reference
---|---|---
Content Library | Add library item | ContentLibrary.AddLibraryItem
... | Update library item | ContentLibrary.UpdateLibraryItem
Datastore | Allocate space | Datastore.AllocateSpace
... | Browse datastore | Datastore.Browse
... | Low level file operations | Datastore.FileManagement
Network | Assign network | Network.Assign
Resource | Assign virtual machine to resource pool | Resource.AssignVMToPool
vApp | Export | vApp.Export
Virtual Machine | Configuration > Add new disk | VirtualMachine.Config.AddNewDisk
... | Configuration > Add or remove device | VirtualMachine.Config.AddRemoveDevice
... | Configuration > Advanced configuration | VirtualMachine.Config.AdvancedConfig
... | Configuration > Change CPU count | VirtualMachine.Config.CPUCount
... | Configuration > Change memory | VirtualMachine.Config.Memory
... | Configuration > Change settings | VirtualMachine.Config.Settings
... | Configuration > Change resource | VirtualMachine.Config.Resource
... | Configuration > Set annotation | VirtualMachine.Config.Annotation
... | Edit Inventory > Create from existing | VirtualMachine.Inventory.CreateFromExisting
... | Edit Inventory > Create new | VirtualMachine.Inventory.Create
... | Edit Inventory > Remove | VirtualMachine.Inventory.Delete
... | Interaction > Configure CD media | VirtualMachine.Interact.SetCDMedia
... | Interaction > Configure floppy media | VirtualMachine.Interact.SetFloppyMedia
... | Interaction > Connect devices | VirtualMachine.Interact.DeviceConnection
... | Interaction > Inject USB HID scan codes | VirtualMachine.Interact.PutUsbScanCodes
... | Interaction > Power off | VirtualMachine.Interact.PowerOff
... | Interaction > Power on | VirtualMachine.Interact.PowerOn
... | Provisioning > Create template from virtual machine | VirtualMachine.Provisioning.CreateTemplateFromVM
... | Provisioning > Mark as template | VirtualMachine.Provisioning.MarkAsTemplate
... | Provisioning > Mark as virtual machine | VirtualMachine.Provisioning.MarkAsVM
... | State > Create snapshot | VirtualMachine.State.CreateSnapshot
If you would like to automate the creation of the custom vSphere role, a Terraform example is included in the project.
- Navigate to the directory for the example.
  cd terraform/vsphere-role
- Duplicate the terraform.tfvars.example file to terraform.tfvars in the directory.
  cp terraform.tfvars.example terraform.tfvars
- Open the terraform.tfvars file and update the variables according to your environment.
- Initialize the current directory and the required Terraform provider for VMware vSphere.
  terraform init
- Create a Terraform plan and save the output to a file.
  terraform plan -out=tfplan
- Apply the Terraform plan.
  terraform apply tfplan
Once the custom vSphere role is created, assign Global Permissions in vSphere for the service account that will be used for the HashiCorp Packer to VMware vSphere integration. Global permissions are required for the content library. For example:
- Log in to the vCenter Server at <management_vcenter_server_fqdn>/ui as administrator@vsphere.local.
- Select Menu > Administration.
- In the left pane, select Access control > Global permissions and click the Add permissions icon.
- In the Add permissions dialog box, enter the service account (e.g., svc-packer-vsphere@rainpole.io), select the custom role (e.g., Packer to vSphere Integration Role) and the Propagate to children check box, and click OK.
In an environment with many vCenter Server instances, such as management and workload domains, you may wish to further reduce the scope of access across the infrastructure in vSphere for the service account. For example, if you do not want Packer to have access to your management domain, but only allow access to workload domains:
- From the Hosts and clusters inventory, select the management domain vCenter Server to restrict scope, and click the Permissions tab.
- Select the service account with the custom role assigned and click the Change role icon.
- In the Change role dialog box, from the Role drop-down menu, select No Access, select the Propagate to children check box, and click OK.
The variables are defined in .pkrvars.hcl files.
Run the config script ./config.sh to copy the .pkrvars.hcl.example files to the config directory.
The config directory is the default. You may override the default by passing an alternate value as the first argument.
./config.sh foo
./build.sh foo
For example, this is useful for running machine image builds for different environments.
San Francisco: us-west-1
./config.sh config/us-west-1
./build.sh config/us-west-1
Los Angeles: us-west-2
./config.sh config/us-west-2
./build.sh config/us-west-2
Edit the config/build.pkrvars.hcl file to configure the following:
- Credentials for the default account on machine images.
Example: config/build.pkrvars.hcl
build_username = "rainpole"
build_password = "<plaintext_password>"
build_password_encrypted = "<sha512_encrypted_password>"
build_key = "<public_key>"
You can also override the build_key value with the contents of a file, if required.
For example:
build_key = file("${path.root}/config/ssh/build_id_ecdsa.pub")
Generate a SHA-512 encrypted password for the build_password_encrypted variable using tools such as mkpasswd.
Example: mkpasswd using Docker on Photon:
rainpole@photon> sudo systemctl start docker
rainpole@photon> sudo docker run -it --rm alpine:latest
mkpasswd -m sha512
Password: ***************
[password hash]
rainpole@photon> sudo systemctl stop docker
Example: mkpasswd using Docker on macOS:
rainpole@macos> docker run -it --rm alpine:latest
mkpasswd -m sha512
Password: ***************
[password hash]
Example: mkpasswd on Ubuntu:
rainpole@ubuntu> mkpasswd -m sha-512
Password: ***************
[password hash]
Generate a public key for the build_key for public key authentication.
Example: macOS and Linux.
rainpole@macos> cd .ssh/
rainpole@macos ~/.ssh> ssh-keygen -t ecdsa -b 521 -C "[email protected]"
Generating public/private ecdsa key pair.
Enter file in which to save the key (/Users/rainpole/.ssh/id_ecdsa):
Enter passphrase (empty for no passphrase): **************
Enter same passphrase again: **************
Your identification has been saved in /Users/rainpole/.ssh/id_ecdsa.
Your public key has been saved in /Users/rainpole/.ssh/id_ecdsa.pub.
The content of the public key, build_key, is added to the .ssh/authorized_keys file of the build_username account on the guest operating system.
Warning
Replace the default public keys and passwords. By default, both public key authentication and password authentication are enabled for Linux distributions. If you wish to disable password authentication and only use public key authentication, comment out or remove the relevant portion of the associated Ansible configure role.
Edit the config/ansible.pkrvars.hcl file to configure the following:
- Credentials for the Ansible account on Linux machine images.
Example: config/ansible.pkrvars.hcl
ansible_username = "ansible"
ansible_key = "<public_key>"
Note
A random password is generated for the Ansible user.
You can also override the ansible_key value with the contents of a file, if required.
For example:
ansible_key = file("${path.root}/config/ssh/ansible_id_ecdsa.pub")
Edit the config/common.pkrvars.hcl file to configure the following common variables:
- Virtual Machine Settings
- Template and Content Library Settings
- Removable Media Settings
- Boot and Provisioning Settings
Example: config/common.pkrvars.hcl
// Virtual Machine Settings
common_vm_version = 19
common_tools_upgrade_policy = true
common_remove_cdrom = true
// Template and Content Library Settings
common_template_conversion = false
common_content_library_name = "sfo-w01-lib01"
common_content_library_ovf = true
common_content_library_destroy = true
// Removable Media Settings
common_iso_datastore = "sfo-w01-cl01-ds-nfs01"
// Boot and Provisioning Settings
common_data_source = "http"
common_http_ip = null
common_http_port_min = 8000
common_http_port_max = 8099
common_ip_wait_timeout = "20m"
common_shutdown_timeout = "15m"
http is the default provisioning data source for Linux machine image builds. If iptables is enabled on your Packer host, you will need to open the ports from common_http_port_min through common_http_port_max.
Example: Open a port range in iptables.
iptables -A INPUT -p tcp --match multiport --dports 8000:8099 -j ACCEPT
You can change the common_data_source from http to disk to build supported Linux machine images without the need to use Packer's HTTP server. This is useful for environments that may not be able to route back to the system from which Packer is running.
The cd_content option is used when selecting disk, unless the distribution does not support a secondary CD-ROM; for those distributions, the floppy_content option is used.
common_data_source = "disk"
If you need to define a specific IPv4 address from your host for Packer's HTTP server, modify the common_http_ip variable from null to a string value that matches an IP address on your Packer host. For example:
common_http_ip = "172.16.11.254"
Edit the config/proxy.pkrvars.hcl file to configure the following:
- SOCKS proxy settings used for connecting to Linux machine images.
- Credentials for the proxy server.
Example: config/proxy.pkrvars.hcl
communicator_proxy_host = "proxy.rainpole.io"
communicator_proxy_port = 1080
communicator_proxy_username = "rainpole"
communicator_proxy_password = "<plaintext_password>"
Edit the config/rhsm.pkrvars.hcl file to configure the following:
- Credentials for your Red Hat Subscription Manager account.
Example: config/rhsm.pkrvars.hcl
rhsm_username = "rainpole"
rhsm_password = "<plaintext_password>"
These variables are only used if you are performing a Red Hat Enterprise Linux Server build and are used to register the image with Red Hat Subscription Manager during the build for system updates and package installation. Before the build completes, the machine image is unregistered from Red Hat Subscription Manager.
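Conceptually, the registration and unregistration performed during a Red Hat Enterprise Linux build resemble the following commands (a simplified sketch of what the provisioning steps do, not the project's exact scripts):
# Register the temporary machine image and attach a subscription for updates and packages.
sudo subscription-manager register --username "<rhsm_username>" --password "<rhsm_password>" --auto-attach
# ...system updates and package installation run here...
# Unregister before the build completes so the resulting image carries no entitlement.
sudo subscription-manager unregister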
Edit the config/vsphere.pkrvars.hcl file to configure the following:
- vSphere Endpoint and Credentials
- vSphere Settings
Example: config/vsphere.pkrvars.hcl
vsphere_endpoint = "sfo-w01-vc01.sfo.rainpole.io"
vsphere_username = "svc-packer-vsphere@rainpole.io"
vsphere_password = "<plaintext_password>"
vsphere_insecure_connection = true
vsphere_datacenter = "sfo-w01-dc01"
vsphere_cluster = "sfo-w01-cl01"
vsphere_datastore = "sfo-w01-cl01-ds-vsan01"
vsphere_network = "sfo-w01-seg-dhcp"
vsphere_folder = "sfo-w01-fd-templates"
If you prefer not to save potentially sensitive information in cleartext files, you can add the variables to environment variables using the included set-envvars.sh script:
rainpole@macos> . ./set-envvars.sh
Note
You must run the script as source or use the shorthand ".".
Edit the *.auto.pkrvars.hcl file in each builds/<type>/<build> folder to configure the following virtual machine hardware settings, as required:
- CPU Sockets (int)
- CPU Cores (int)
- Memory in MB (int)
- Primary Disk in MB (int)
- .iso Path (string)
- .iso File (string)
- .iso Checksum Type (string)
- .iso Checksum Value (string)
Note
All *.auto.pkrvars.hcl files default to using the VMware Paravirtual SCSI controller and the VMXNET 3 network card device types.
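As a sketch, the hardware-related entries in one of these files might look like the following; the variable names here are shown for illustration and should be taken from the build's existing *.auto.pkrvars.hcl file:
vm_cpu_sockets = 2
vm_cpu_cores   = 1
vm_mem_size    = 2048
vm_disk_size   = 40960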
If required, modify the configuration files for the Linux distributions and Microsoft Windows.
Username and password variables are passed into the kickstart or cloud-init files for each Linux distribution as Packer template files (.pkrtpl.hcl) to generate these on-demand. Ansible roles are then used to configure the Linux machine image builds.
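For context, a build typically renders such a template with Packer's templatefile() function and serves the result to the guest; a minimal sketch, assuming the file and variable names used elsewhere in this document (not the project's exact source):
# Render the kickstart template with the build credentials and serve it over Packer's HTTP server.
http_content = {
  "/ks.cfg" = templatefile("${abspath(path.root)}/data/ks.pkrtpl.hcl", {
    build_username           = var.build_username
    build_password_encrypted = var.build_password_encrypted
    build_key                = var.build_key
  })
}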
Variables are passed into the Microsoft Windows unattend files (autounattend.xml) as Packer template files (autounattend.pkrtpl.hcl) to generate these on-demand. By default, each unattend file is set to use the KMS client setup keys as the Product Key.
PowerShell scripts are used to configure the Windows machine image builds.
Need help customizing the configuration files?
- VMware Photon OS - Read the Photon OS Kickstart Documentation.
- Ubuntu Server - Install and run system-config-kickstart on an Ubuntu desktop.
  sudo apt-get install system-config-kickstart
  ssh -X rainpole@ubuntu-desktop
  sudo system-config-kickstart
- Red Hat Enterprise Linux (as well as CentOS Linux/Stream, AlmaLinux OS, and Rocky Linux) - Use the Red Hat Kickstart Generator.
- Microsoft Windows - Use the Microsoft Windows Answer File Generator if you need to customize the provided examples further.
Save a copy of your PEM-encoded Root Certificate Authority certificate in .cer format to the following locations:
- /ansible/roles/base/files for Linux machine images.
- /certificates for Windows machine images.
These files are copied to the guest operating system and the certificate is added to the trusted certificate authorities of the guest operating system.
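On an Enterprise Linux guest, for example, the effect is roughly equivalent to the following commands (a simplified illustration of what the Ansible role accomplishes, not its actual tasks):
# Copy the Root CA certificate into the system trust anchors and rebuild the trust store.
sudo cp root-ca.cer /etc/pki/ca-trust/source/anchors/root-ca.crt
sudo update-ca-trust extract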
Linux distributions use the Ansible provisioner, but Windows still uses the shell provisioner at this time.
Start a build by running the build script (./build.sh). The script presents a menu which simply calls Packer and the respective build(s).
You can also start a build based on a specific source for some of the virtual machine images.
For example, if you simply want to build a Microsoft Windows Server 2022 Standard Core, run the following:
Initialize the plugins:
rainpole@macos> packer init builds/windows/server/2022/.
Build a specific machine image:
rainpole@macos> packer build -force \
--only vsphere-iso.windows-server-standard-core \
-var-file="config/vsphere.pkrvars.hcl" \
-var-file="config/build.pkrvars.hcl" \
-var-file="config/common.pkrvars.hcl" \
builds/windows/server/2022
If you would prefer not to save sensitive information in cleartext files, you can add the variables to environment variables using the included set-envvars.sh script.
rainpole@macos> . ./set-envvars.sh
Note
You must run the script as source or use the shorthand ".".
Initialize the plugins:
rainpole@macos> packer init builds/windows/server/2022/.
Build a specific machine image using environmental variables:
rainpole@macos> packer build -force \
--only vsphere-iso.windows-server-standard-core \
builds/windows/server/2022
The build script (./build.sh) can be generated using a template (./build.tmpl) and a configuration file in YAML (./build.yaml).
Generate a custom build script:
rainpole@macos> gomplate -c build.yaml -f build.tmpl -o build.sh
Happy building!!!
- Read Debugging Packer Builds.
- Owen Reynolds (@OVDamn) - VMware Tools for Windows installation PowerShell script.