While Alma manages the metadata and workflows related to print and electronic resources, digital resources are unique in that the objects themselves are stored in Alma. Alma provides library staff with the ability to manage digital workflows and objects through its user interface, hiding the complexity of cloud storage, and offers integrated resource delivery to patrons via Primo.
The nature of digital resource management often requires deep integration with other systems. In some cases, the out-of-the-box tools are not sufficient to meet all of an institution’s integration requirements. In the sections that follow, we document the digital storage architecture, along with integration points for ingest and access scenarios.
Alma uses the Amazon Web Services (AWS) Simple Storage Service (S3) as the backend for digital resources stored in Alma. Amazon S3 provides a highly durable storage infrastructure in which digital objects are stored across multiple devices for redundancy. Amazon S3 regularly verifies the integrity of stored digital objects using checksums. If Amazon S3 detects data corruption, it repairs the object using the redundant copies. In addition, Amazon S3 calculates checksums on all network traffic to detect corruption of data packets when storing or retrieving data. Ex Libris further protects digital objects by utilizing Amazon S3’s versioning capability, allowing customers to request restoration of a digital object from a previous version for up to 90 days.
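The same kind of integrity check can be performed on the client side: record a digest before uploading a file and recompute it after download to confirm the bytes survived the round trip. A minimal sketch using only the Python standard library (the file name in the comment is illustrative):

```python
import hashlib

def file_sha256(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a file, reading in 1 MiB chunks
    so large digital objects do not need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Record the digest at ingest time, e.g.:
#   recorded = file_sha256("scan_0001.tiff")
# After retrieving the file later, recompute and compare:
#   assert file_sha256("scan_0001.tiff") == recorded
```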
S3 is deeply integrated into Alma workflows, so there is no need to interact with it directly from the Alma user interface. Institutions that require direct access to the files stored in Alma can use third-party tools to communicate with the S3 service. Several such tools are available, including:
- AWS command line interface
- S3 Browser, an FTP-like interface
- Alma Digital Uploader, a stand-alone third-party upload tool customized for Alma
Within the S3 bucket, Alma provisions directories for each institution. The top-level directory within the bucket is named for the institution code, for example 01UNI_INST. This separation ensures data segregation, as customers only have access to materials under their home directory. Under the institution code are two additional directories, one for upload and one for storage. The upload directory is where materials can be staged before being processed by Alma; the institution’s access key (described below) provides read/write access to this directory. The storage directory is where Alma stores files that have been added as digital inventory in Alma. The institution’s access key provides read-only access to the storage directory, as file management must be done in Alma.
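The layout above can be expressed as a small helper that builds object keys for each area. This is a sketch of the described structure; the directory names `upload` and `storage` follow the description above, but treat the exact casing and separators as illustrative:

```python
def upload_key(institution_code: str, filename: str) -> str:
    """Key for staging a file in the institution's upload area (read/write)."""
    return f"{institution_code}/upload/{filename}"

def storage_key(institution_code: str, filename: str) -> str:
    """Key for a file under Alma-managed storage (read-only to the institution)."""
    return f"{institution_code}/storage/{filename}"

# Using the example institution code from above:
print(upload_key("01UNI_INST", "scan_0001.tiff"))   # 01UNI_INST/upload/scan_0001.tiff
print(storage_key("01UNI_INST", "scan_0001.tiff"))  # 01UNI_INST/storage/scan_0001.tiff
```

Files written under the upload prefix are visible to Alma for ingest; anything under the storage prefix can only be read, since Alma owns that area.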
Institutions can use their sandbox environments to test digital workflows and data. Sandbox governance and storage limits are stipulated in the Knowledge Center.
S3 Regions and Buckets
Digital resources in Alma are stored in one of three AWS regions, depending on the location of the Alma data center. In each region, a “bucket”, or S3 storage location, is defined.
Below are the AWS regions and bucket names defined for each Alma data center:
| Production | Sandbox |
| --- | --- |
| North America – Production | North America – Sandbox |
| Canada – Production | Canada – Sandbox |
| Europe – Production | Europe – Sandbox |
| Asia Pacific – Production | Asia Pacific – Sandbox |
| China – Production | China – Sandbox |
To directly access files in S3, an access key and secret are needed. Access keys are managed in Alma from Resource Management > Resource Configuration > Digital Storage. Each institution can create up to two access key and secret pairs. Once issued, the secret cannot be recovered, so store it securely. If a secret is lost or compromised, the access key can be revoked and a new one issued.
The access key and secret are used for direct access to the S3 storage area allocated to your institution using third-party tools like those mentioned above. Access keys are per institution; credentials generated via the sandbox environment are valid for the production environment (and vice versa). Be sure to access the correct bucket and paths to avoid permissions errors.
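Putting the pieces together, direct access with the institution’s key pair could look like the sketch below. It uses the third-party `boto3` AWS SDK (one option among the tools mentioned above, not part of Alma); the access key, secret, region, and bucket are placeholders to be replaced with the values issued in Alma and documented for your data center:

```python
try:
    import boto3  # third-party AWS SDK: pip install boto3
except ImportError:
    boto3 = None  # the pure-Python helpers below still work without it

def institution_prefix(institution_code: str, area: str) -> str:
    """Prefix for the institution's 'upload' (read/write) or 'storage'
    (read-only) area, per the directory layout described above."""
    assert area in ("upload", "storage")
    return f"{institution_code}/{area}/"

def list_staged_files(client, bucket: str, institution_code: str):
    """List objects currently staged in the institution's upload area."""
    prefix = institution_prefix(institution_code, "upload")
    response = client.list_objects_v2(Bucket=bucket, Prefix=prefix)
    return [obj["Key"] for obj in response.get("Contents", [])]

if boto3 is not None:
    # ACCESS_KEY / SECRET are the pair generated in Alma's Digital Storage
    # configuration; REGION and BUCKET are placeholders for the values
    # defined for your Alma data center.
    client = boto3.client(
        "s3",
        aws_access_key_id="ACCESS_KEY",
        aws_secret_access_key="SECRET",
        region_name="REGION",
    )
    # Stage a local file for ingest by Alma:
    # client.upload_file("scan_0001.tiff", "BUCKET",
    #                    institution_prefix("01UNI_INST", "upload") + "scan_0001.tiff")
```

Writing anywhere outside the institution’s upload prefix (for example, into the storage area) will fail with a permissions error, which is the expected behavior given the read-only policy described above.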