Note: The following column headings are used in Table A-1 on page 390:
•Event Code : Unique event code as shown by the GUI or the event_list command (see the example that follows this list)
•Category : Category for the Event code
•Cause : Cause/Trigger for the Event
•Action : Gives a brief hint about whether an action must be taken and which component or system state should be analyzed:
 – None : Informational event only; triggered by a user command, or an OK state is reported
 – Contact IBM : The event should be reviewed by checking IBM Knowledge Center or by opening a problem management record (PMR)
 – Component : Any other component involved, such as:
 • Spectrum Accelerate module
 • Disk
 • Service or the system state
 • Hosting ESX/VM server, management, interconnect, or iSCSI host network
 • Power environment
 • Networking environment
 • Mirror or migration targets
 • Host servers
 • Firewall setup
•Event Text: The detailed text of the event as described in the Description column of the Events view.
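Note: To check whether a given code has occurred on a live system, the same event_list command that feeds the Events view can filter by event code. The following is a minimal sketch, assuming the standard XCLI filters code= and max_events= are available at your code level (verify with the XCLI help for event_list):

   event_list code=DISK_HAS_FAILED max_events=10

The value passed to code= is the Event Code column of Table A-1; the Description column in the output corresponds to the Event Text column.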
|
Event code
|
Category
|
Cause
|
Action
|
Event text
|
ACCESS_OF_USER_GROUP_TO_CLUSTER_REMOVED
|
USER ACCESS
|
USER
|
NONE
|
Access of User group '{User Group Name}' to cluster '{Cluster Name}' was removed.
|
ACCESS_OF_USER_GROUP_TO_HOST_REMOVED
|
USER ACCESS
|
USER
|
NONE
|
Access of User group '{User Group Name}' to host '{Host Name}' was removed.
|
ACCESS_TO_CLUSTER_GRANTED_TO_USER_GROUP
|
USER ACCESS
|
USER
|
NONE
|
User group '{User Group Name}' was granted access to cluster '{Cluster Name}'.
|
ACCESS_TO_HOST_GRANTED_TO_USER_GROUP
|
USER ACCESS
|
USER
|
NONE
|
User group '{User Group Name}' was granted access to host '{Host Name}'.
|
APPADMIN_CAPABILITIES_SET
|
USER ACCESS
|
USER
|
NONE
|
Application admin capabilities have been set to {Capabilities}
|
AUDIT_DISABLED
|
SECURITY
|
USER
|
NONE
|
CLI command auditing deactivated.
|
AUDIT_ENABLED
|
SECURITY
|
USER
|
NONE
|
CLI command auditing activated.
|
BOIDEM_DISK_ABNORMAL_ERROR
|
BOIDEM DISK
|
HARDWARE
|
Module / ESX DATASTORE
|
Unit attentions or aborts in the last 30 minutes on {Disk ID}, start lba={start_lba}, last lba={last_lba}, command={command}, latency={latency} ms.
|
BOIDEM_DISK_DEFERRED_ERROR
|
BOIDEM DISK
|
HARDWARE
|
Module / ESX DATASTORE
|
Deferred error on {Disk ID}, start LBA={Start LBA}, last LBA={Last LBA}, latency={latency} ms, key={key}
|
BOIDEM_DISK_ERROR_SENSE_INFORMATION
|
BOIDEM DISK
|
SOFTWARE
|
Module / ESX DATASTORE
|
Disk {Disk ID} had sense information indicating an error: {Sense Key Number}/{Sense Code Number 1}/{Sense Code Number 2} (FRU={FRU Code}) {Sense Key} - {Sense Code}.
|
BOIDEM_DISK_KEEPALIVE_FAILED
|
BOIDEM DISK
|
HARDWARE
|
Module / ESX DATASTORE
|
Disk {Disk ID} is not responding to keepalives of type {Type} for {Time from last success ms}ms
|
BOIDEM_DISK_KEEPALIVE_OK
|
BOIDEM DISK
|
SOFTWARE
|
Module / ESX DATASTORE
|
Disk {Disk ID} is responding to keepalives of type {Type} after {Time from last success ms}ms
|
BOIDEM_DISK_KILLED
|
BOIDEM DISK
|
SOFTWARE
|
Module / ESX DATASTORE
|
Boidem disk {Disk ID} killed.
|
BOIDEM_DISK_LONG_LATENCY
|
BOIDEM DISK
|
HARDWARE
|
Module / ESX DATASTORE
|
Disk {Disk ID} has been exhibiting long I/O latency in the last 30 minutes, start LBA={Start LBA}, last LBA={Last LBA}, command={command}, latency={latency} ms.
|
BOIDEM_DISK_MEDIUM_ERROR
|
BOIDEM DISK
|
HARDWARE
|
Module / ESX DATASTORE
|
Media errors on {Disk ID}, start LBA={Start LBA}, last LBA={Last LBA}, latency={latency} ms.
|
BOIDEM_DISK_RESPONSIVE
|
BOIDEM DISK
|
HARDWARE
|
Module / ESX DATASTORE
|
Disk {Disk ID} is now responsive. Was unresponsive for {unresponsive_time} ms
|
BOIDEM_DISK_REVIVED
|
BOIDEM DISK
|
HARDWARE
|
Module / ESX DATASTORE
|
Boidem disk {Disk ID} revived.
|
BOIDEM_DISK_UNRESPONSIVE
|
BOIDEM DISK
|
HARDWARE
|
Module / ESX DATASTORE
|
Disk {Disk ID} is unresponsive for {time} ms
|
BOIDEM_FS_IS_RO
|
BOIDEM DISK
|
SOFTWARE
|
Module / ESX DATASTORE
|
Boidem mount point {Read-only mount point} is in a read-only state on module {module}.
|
BOIDEM_MISSING_MOUNT_POINT
|
BOIDEM DISK
|
SOFTWARE
|
Module / ESX DATASTORE
|
Boidem is missing a mount point at {Missing mount point} on module {module}.
|
BULK_EMAIL_HAS_FAILED
|
ALERTS
|
ENVIRONMENT
|
SMTP Gateway / Network
|
Sending bulk email with {Events Number} events to {Destination List} via {SMTP Gateway} failed. Module: {Module ID}; Error message: '{Error Message}'; timeout expired: {Timeout Expired?}.
|
CACHE_HAS_LESS_MEMORY
|
CACHE
|
HARDWARE
|
Module DIMM
|
Data module has less memory than expected. node={node} - {gb_missing} GB missing.
|
CERTIFICATE_REMOVED
|
SECURITY
|
USER
|
NONE
|
The certificate named '{name}' was removed.
|
CG_MIRROR_CREATE
|
MIRROR
|
USER
|
NONE
|
A remote mirror was defined for Consistency Group '{local CG name}' on Target '{target name}'. Remote Consistency Group is '{remote CG name}'.
|
CG_MIRROR_CREATE_SLAVE
|
MIRROR
|
USER
|
NONE
|
A remote mirror was defined by Target '{target name}' for CG '{local CG name}'. Remote CG is '{remote CG name}'.
|
CLUSTER_ADD_HOST
|
CLUSTER
|
USER
|
NONE
|
Host with name '{host.name}' was added to Cluster with name '{cluster.name}'.
|
CLUSTER_CANCEL_EXCEPTION
|
CLUSTER
|
USER
|
NONE
|
LUN '{LUN}' was defined as having uniform mapping in cluster '{cluster}'.
|
CLUSTER_CREATE
|
CLUSTER
|
USER
|
NONE
|
Cluster was defined with name '{cluster.name}'.
|
CLUSTER_CREATE_FAILED_TOO_MANY
|
CLUSTER
|
USER
|
NONE
|
Cluster with name '{name}' could not be defined. You are attempting to define more Clusters than the system permits.
|
CLUSTER_DEFINE_EXCEPTION
|
CLUSTER
|
USER
|
NONE
|
LUN '{LUN}' was defined as having host specific mapping in cluster '{cluster}'.
|
CLUSTER_DELETE
|
CLUSTER
|
USER
|
NONE
|
Cluster with name '{cluster.name}' was deleted.
|
CLUSTER_REMOVE_HOST
|
CLUSTER
|
USER
|
NONE
|
Host with name '{host.name}' was removed from Cluster with name '{cluster.name}'.
|
CLUSTER_RENAME
|
CLUSTER
|
USER
|
NONE
|
Cluster with name '{old_name}' was renamed '{cluster.name}'.
|
COMPONENT_FAILURE_WAS_CANCELED
|
COMPONENT
|
SOFTWARE
|
NONE
|
Component {Component ID} failure status was reset.
|
COMPONENT_FIRMWARE_CANNOT_FAIL_COMPONENT
|
COMPONENT
|
SOFTWARE
|
Component
|
Cannot fail {Component ID}: {Error}. Firmware upgrade result was: {Upgrade result}.
|
COMPONENT_FIRMWARE_CANNOT_PHASEOUT_COMPONENT
|
COMPONENT
|
SOFTWARE
|
Component
|
Cannot phase out {Component ID}: {Error}. Firmware upgrade result was: {Upgrade result}.
|
COMPONENT_FIRMWARE_UPGRADE_ABORTED
|
COMPONENT
|
SOFTWARE
|
Component
|
Aborted {Upgrade type} upgrade of {Firmware type} firmware, version {Label}, on {Scope}. Abort reason: {Reason}. Progress {Attempted}/{Total}, {Successes} succeeded, {Failures} failed, {No-Ops} no-ops.
|
COMPONENT_FIRMWARE_UPGRADE_ABORTING
|
COMPONENT
|
SOFTWARE
|
Component
|
Aborting {Upgrade type} upgrade of {Firmware type} firmware, version {Label}, on {Scope}. Abort reason: {Reason}. Waiting for current upgrade item to complete.
|
COMPONENT_FIRMWARE_UPGRADE_DONE
|
COMPONENT
|
SOFTWARE
|
Component
|
Finished {Upgrade type} upgrade of {Firmware type} firmware, version {Label}, on {Scope}. {Successes} succeeded, {Failures} failed, {No-Ops} no-ops.
|
COMPONENT_FIRMWARE_UPGRADE_STARTED
|
COMPONENT
|
SOFTWARE
|
Component
|
Starting {Upgrade type} upgrade of {Firmware type} firmware, version {Label}, on {Scope}.
|
COMPONENT_FRU_REJECTED
|
COMPONENT
|
SOFTWARE
|
Component
|
{Component ID} - Failed FRU validation.
|
COMPONENT_NETWORK_LINK_IS_DOWN
|
COMPONENT
|
HARDWARE
|
Network
|
Network Interface to {Connected Component} on {Component ID} - link disconnected.
|
COMPONENT_NETWORK_LINK_IS_UP
|
COMPONENT
|
SOFTWARE
|
NONE
|
Network Interface to component {Connected Component} on {Component ID} - link regained.
|
COMPONENT_REQUIRED_SERVICE_CLEARED
|
COMPONENT
|
SOFTWARE
|
NONE
|
Component {Component ID} does NOT require service anymore
|
COMPONENT_REQUIRES_IMMEDIATE_SERVICING
|
COMPONENT
|
SOFTWARE
|
Component
|
Component {Component ID}, which previously had its service deferred, now requires immediate service: {Component Required Service}, due to: {Component Service Reason}
|
COMPONENT_REQUIRES_SERVICING
|
COMPONENT
|
SOFTWARE
|
Component
|
Component {Component ID} requires service: {Component Required Service}, due to: {Component Service Reason}. The urgency of this service is {Maintenance Urgency}
|
COMPONENT_TEMPERATURE_IS_ABNORMALLY_HIGH
|
COMPONENT
|
HARDWARE
|
Temperature
|
{Component ID} temperature is {temperature}C. The temperature is abnormally high.
|
COMPONENT_TEMPERATURE_IS_ABNORMALLY_HIGH_AND_DROPPING
|
COMPONENT
|
HARDWARE
|
Temperature
|
{Component ID} temperature is {temperature}C. The temperature is dropping, but still abnormally high.
|
COMPONENT_TEMPERATURE_IS_ABNORMALLY_HIGH_AND_STABILIZING
|
COMPONENT
|
HARDWARE
|
Temperature
|
{Component ID} temperature is {temperature}C. The temperature is stabilizing, but still abnormally high.
|
COMPONENT_TEMPERATURE_IS_EXTREMELY_HIGH
|
COMPONENT
|
HARDWARE
|
Temperature
|
{Component ID} temperature is {temperature}C. The temperature is extremely high. The component may immediately fail and permanent damage may occur.
|
COMPONENT_TEMPERATURE_IS_DROPPING
|
COMPONENT
|
HARDWARE
|
Temperature
|
{Component ID} temperature is {temperature}C. Temperature is dropping.
|
COMPONENT_TEMPERATURE_IS_HIGH
|
COMPONENT
|
HARDWARE
|
Temperature
|
{Component ID} temperature is {temperature}C. The temperature is high.
|
COMPONENT_TEMPERATURE_IS_HIGH_AND_DROPPING
|
COMPONENT
|
HARDWARE
|
Temperature
|
{Component ID} temperature is {temperature}C. The temperature is dropping, but still high.
|
COMPONENT_TEMPERATURE_IS_HIGH_AND_STABILIZING
|
COMPONENT
|
HARDWARE
|
Temperature
|
{Component ID} temperature is {temperature}C. The temperature is stabilizing, but still high.
|
COMPONENT_TEMPERATURE_IS_NORMAL
|
COMPONENT
|
HARDWARE
|
NONE
|
{Component ID} temperature is {temperature}C. The temperature is normal.
|
COMPONENT_TEMPERATURE_IS_RISING
|
COMPONENT
|
HARDWARE
|
Temperature
|
{Component ID} temperature is {temperature}C. Temperature is rising.
|
COMPONENT_TEMPERATURE_IS_STABILIZING
|
COMPONENT
|
HARDWARE
|
Temperature
|
{Component ID} temperature is {temperature}C. Temperature is stabilizing.
|
COMPONENT_TEMPERATURE_IS_VERY_HIGH
|
COMPONENT
|
HARDWARE
|
Temperature
|
{Component ID} temperature is {temperature}C. The temperature is very high and may affect component performance or even damage it.
|
COMPONENT_TEMPERATURE_IS_VERY_HIGH_AND_DROPPING
|
COMPONENT
|
HARDWARE
|
Temperature
|
{Component ID} temperature is {temperature}C. The temperature is dropping, but still very high.
|
COMPONENT_TEMPERATURE_IS_VERY_HIGH_AND_STABILIZING
|
COMPONENT
|
HARDWARE
|
Temperature
|
{Component ID} temperature is {temperature}C. The temperature is stabilizing, but still very high.
|
COMPONENT_TEST_HAS_FAILED
|
COMPONENT
|
USER
|
Component
|
Test of {Component ID} has failed. Failure reason: {Failure Reason}.
|
COMPONENT_TEST_OF_DISK_HAS_FAILED
|
COMPONENT
|
USER
|
Disk
|
Test of {Component ID} has failed with error {Error}.
|
COMPONENT_TEST_OF_SSD_HAS_FAILED
|
COMPONENT
|
USER
|
SSD
|
Test of {Component ID} has failed with error {Error}.
|
COMPONENT_TEST_SUCCEEDED
|
COMPONENT
|
USER
|
NONE
|
Test of {Component ID} succeeded.
|
COMPONENT_WAS_EQUIPPED
|
COMPONENT
|
USER
|
NONE
|
{Component ID} was equipped.
|
COMPONENT_WAS_FAILED
|
COMPONENT
|
SOFTWARE
|
Component
|
Component {Component ID} was marked as failed.
|
COMPONENT_WAS_PHASED_IN
|
COMPONENT
|
USER
|
NONE
|
{Component ID} was phased-in.
|
COMPONENT_WAS_PHASED_OUT
|
COMPONENT
|
USER / SOFTWARE
|
Component
|
{Component ID} was phased-out.
|
COMPONENT_WAS_UNEQUIPPED
|
COMPONENT
|
USER
|
NONE
|
{Component ID} was unequipped.
|
CONNECTED_HOSTS_LIMIT_REACHED
|
HOST
|
SOFTWARE
|
Number of hosts
|
The maximum number of connected Hosts was reached for port '{port_id}' in Module {Module Id}.
|
CONS_GROUP_ADD_VOLUME
|
CONSISTENCY GROUP
|
USER
|
NONE
|
Volume with name '{volume.name}' was added to Consistency Group with name '{cg.name}'.
|
CONS_GROUP_CREATE
|
CONSISTENCY GROUP
|
USER
|
NONE
|
Consistency Group with name '{cg.name}' was created.
|
CONS_GROUP_CREATE_FAILED_TOO_MANY
|
CONSISTENCY GROUP
|
SOFTWARE
|
Number of CGs
|
Consistency Group with name '{name}' could not be created. You are attempting to add more Consistency Groups than the system permits.
|
CONS_GROUP_DELETE
|
CONSISTENCY GROUP
|
USER
|
NONE
|
Consistency Group with name '{cg.name}' was deleted.
|
CONS_GROUP_GROUPED_POOL_MOVE
|
CONSISTENCY GROUP
|
USER
|
NONE
|
Consistency Group with name '{cg.name}' has been moved from Grouped Pool '{orig_gp.name}' to Grouped Pool '{gp.name}'.
|
CONS_GROUP_MODIFIED_DURING_IO_PAUSE
|
CONSISTENCY GROUP
|
USER
|
NONE
|
CG '{cg_name}' was modified during Pause IO with token '{token}'
|
CONS_GROUP_MOVE
|
CONSISTENCY GROUP
|
USER
|
NONE
|
Consistency Group with name '{cg.name}' has been moved from Storage Pool '{orig_pool.name}' to Pool '{pool.name}'.
|
CONS_GROUP_REMOVE_VOLUME
|
CONSISTENCY GROUP
|
USER
|
NONE
|
Volume with name '{volume.name}' was removed from Consistency Group with name '{cg.name}'.
|
CONS_GROUP_RENAME
|
CONSISTENCY GROUP
|
USER
|
NONE
|
Consistency Group with name '{old_name}' was renamed '{cg.name}'.
|
CONS_GROUP_SNAPSHOTS_CREATE
|
CONSISTENCY GROUP
|
USER
|
NONE
|
Snapshot Group for Consistency Group with name '{cg.name}' was created with name '{cs_name}'.
|
CONS_GROUP_SNAPSHOTS_CREATE_FAILED_TOO_MANY
|
CONSISTENCY GROUP
|
SOFTWARE
|
Number of CG Snapshots
|
Snapshot Group for Consistency Group '{cg.name}' could not be created. You are attempting to add more Volumes than the system permits.
|
CONS_GROUP_SNAPSHOTS_OVERWRITE
|
CONSISTENCY GROUP
|
USER
|
NONE
|
Snapshot Group named '{cs_name}' was overridden for Consistency Group with name '{cg.name}'.
|
CPU_FAILED
|
HARDWARE
|
HARDWARE
|
Module's CPU
|
{Component ID} has failed. Hardware status: {Status}.
|
CR_BYPASS_ACCESS
|
SECURITY
|
USER
|
NONE
|
{Command that bypasses CR mechanism} access to '{Unix Account Name}' account on module '{Component ID}' from '{IP Address}'.
|
CR_KEY_SETUP_FAILED
|
SECURITY
|
USER
|
NONE
|
Failed to set challenge-response key on module '{Component ID}'.
|
CR_KEY_SETUP_OK
|
SECURITY
|
USER
|
NONE
|
Challenge-response key was successfully set on all modules in the system.
|
CR_KEY_UPGRADE_NOT_DONE
|
SECURITY
|
USER
|
NONE
|
Challenge-response key was not upgraded on the system since a valid key has been previously set.
|
CUSTOM_EVENT
|
PROACTIVE SUPPORT
|
USER
|
Verify if PMR generated
|
{Description}
|
DATA_REBUILD_COMPLETED
|
DATA
|
SOFTWARE
|
NONE
|
Rebuild process completed. System data is now protected.
|
DATA_REBUILD_COMPLETED_REDIST_STARTED
|
DATA
|
SOFTWARE
|
NONE
|
Rebuild process completed. System data is now protected. Starting data transfer to new disks.
|
DATA_REBUILD_COULD_NOT_BE_COMPLETED
|
DATA
|
SOFTWARE
|
Contact IBM
|
Rebuild process could not be completed due to insufficient unused disk space. System data is not protected.
|
DATA_REBUILD_STARTED
|
DATA
|
SOFTWARE
|
NONE
|
Rebuild process started because system data is not protected. {data_percent}% of the data must be rebuilt.
|
DATA_REDIST_COMPLETED
|
DATA
|
SOFTWARE
|
NONE
|
Completed data transfer to new disks.
|
DATA_REDIST_STARTED
|
DATA
|
SOFTWARE
|
NONE
|
Starting data transfer to new disks.
|
DATA_SERVICE_FINISHED_PHASEOUT
|
DATA
|
SOFTWARE
|
NONE
|
System finished phasing out {Component ID}.
|
DATA_SERVICE_STARTED_PHASEOUT
|
DATA
|
SOFTWARE
|
NONE
|
System started phasing out {Component ID}.
|
DESIGNATED_MSM_USER
|
INTEGRATION
|
USER
|
NONE
|
{Description}
|
DESTINATION_DEFINE
|
PROACTIVE SUPPORT / ALERTS
|
USER
|
NONE
|
Destination with name '{name}' was defined.
|
DESTINATION_DELETE
|
PROACTIVE SUPPORT / ALERTS
|
USER
|
NONE
|
Destination with name '{name}' was deleted.
|
DESTINATION_GROUP_ADD_DESTINATION
|
PROACTIVE SUPPORT / ALERTS
|
USER
|
NONE
|
Destination with name '{destination name}' was added to destination group '{destgroup name}'.
|
DESTINATION_GROUP_CREATE
|
PROACTIVE SUPPORT / ALERTS
|
USER
|
NONE
|
Destination Group with name '{name}' was created.
|
DESTINATION_GROUP_DELETE
|
PROACTIVE SUPPORT / ALERTS
|
USER
|
NONE
|
Destination Group with name '{name}' was deleted.
|
DESTINATION_GROUP_REMOVE_DESTINATION
|
PROACTIVE SUPPORT / ALERTS
|
USER
|
NONE
|
Destination with name '{destination name}' was removed from destination group '{destgroup name}'.
|
DESTINATION_GROUP_RENAME
|
PROACTIVE SUPPORT / ALERTS
|
USER
|
NONE
|
Destination Group with name '{old name}' was renamed '{new name}'.
|
DESTINATION_GROUP_UPDATE
|
PROACTIVE SUPPORT / ALERTS
|
USER
|
NONE
|
Destination Group with name '{name}' was updated.
|
DESTINATION_RENAME
|
PROACTIVE SUPPORT / ALERTS
|
USER
|
NONE
|
Destination with name '{old name}' was renamed '{new name}'.
|
DESTINATION_UPDATE
|
PROACTIVE SUPPORT / ALERTS
|
USER
|
NONE
|
Destination with name '{name}' was updated.
|
DIMM_CHANGE_DETECTED
|
DIMM
|
HARDWARE
|
NONE
|
{Component ID} has been changed from a serial of {old_serial} to {new_serial}.
|
DIMM_COMPLIANCE_CHECK_DIMM_FAILED
|
DIMM
|
HARDWARE
|
DIMM
|
DIMM in slot {DIMM id}, part number '{Part number}', on module {Component ID} does not conform to the specification.
|
DIMM_COMPLIANCE_CHECK_FAILED
|
DIMM
|
SOFTWARE
|
DIMM
|
Installed DIMMs on module {Component ID} do not conform to the specification: {Failure reason}
|
DIMM_CORRECTABLE_ERROR_DETECTED
|
DIMM
|
HARDWARE
|
DIMM
|
Memory correctable ECC errors were detected on {Module}, {Count} errors on DIMM channel {Channel}, position {Position}.
|
DIMM_ERRORS_PHASING_OUT_MODULE
|
DIMM
|
SOFTWARE
|
DIMM / MODULE
|
{Module} will be phased out because too many DIMM errors were detected on it.
|
DIMM_FAILED
|
DIMM
|
HARDWARE
|
DIMM
|
{Component ID} has failed. Hardware status: {Status}.
|
DIMM_UNCORRECTABLE_ERROR_DETECTED
|
DIMM
|
HARDWARE
|
DIMM
|
Memory uncorrectable ECC errors were detected on {Module}, {Count} errors on DIMM channel {Channel}, position {Position}.
|
DISK_ABNORMAL_ERROR
|
DISK
|
HARDWARE
|
DISK
|
Unit attentions or aborts in the last 30 minutes on {Disk ID}, start lba={start_lba}, last lba={last_lba}, command={command}, latency={latency} ms.
|
DISK_BAD_PERFORMANCE
|
DISK
|
HARDWARE
|
DISK
|
Bad performance on {Disk ID}, I/O count={I/O Count}, transferred kbytes={kbytes}, msecs={seconds}.
|
DISK_BLOCK_SIZE_IS_INVALID
|
DISK
|
SOFTWARE
|
DISK
|
{Component ID} was formatted with invalid block size of {Block Size}.
|
DISK_BMS_ERROR_DETECTED
|
DISK
|
HARDWARE
|
DISK
|
{Component ID} - BMS error detected: {Sense Key}/{Additional Sense Code}/{Additional Sense Code Qualifier} {Sense Key} - {Sense Code} (LBA: {LBA}).
|
DISK_CHANGE_WAS_DETECTED
|
DISK
|
HARDWARE
|
DISK
|
{Component ID} has been changed from a {Old Vendor}-{Old Model} with a serial of {Old Serial} and with a firmware of {Old Firmware} to a {New Vendor}-{New Model} with a serial of {New Serial} and with a firmware of {New Firmware}.
|
DISK_COMPONENT_TEST_STARTED
|
DISK
|
USER
|
NONE
|
Test of {Component ID} started.
|
DISK_DEFERRED_ERROR
|
DISK
|
HARDWARE
|
DISK
|
Deferred error on {Disk ID}, start LBA={Start LBA}, last LBA={Last LBA}, latency={latency} ms, key={key}
|
DISK_DOES_NOT_EXIST
|
DISK
|
HARDWARE
|
DISK
|
{Component ID} doesn't exist.
|
DISK_ERROR_SENSE_INFORMATION
|
DISK
|
HARDWARE
|
DISK
|
Disk {Disk ID} had sense information indicating an error: {Sense Key Number}/{Sense Code Number 1}/{Sense Code Number 2} (FRU={FRU Code}) {Sense Key} - {Sense Code}.
|
DISK_EXCESSIVE_BMS_ACTIVITY
|
DISK
|
HARDWARE
|
DISK
|
{Component ID} exhibits excessive BMS activity, fill time is {Time to fill BMS log} minutes.
|
DISK_FAILED_SHORT_STANDARD_TEST
|
DISK
|
HARDWARE
|
DISK
|
{Component ID} - Failed short standard test.
|
DISK_FINISHED_PHASEIN
|
DISK
|
SOFTWARE
|
NONE
|
System finished phasing in {Component ID}.
|
DISK_FINISHED_PHASEOUT
|
DISK
|
SOFTWARE
|
DISK
|
System finished phasing out {Component ID}.
|
DISK_GLIST_CHANGED
|
DISK
|
HARDWARE
|
DISK
|
Disk {Component ID} GLIST changed from {Previous glist size} to {Current glist Size}.
|
DISK_GLIST_SIZE_TOO_HIGH
|
DISK
|
HARDWARE
|
DISK
|
Disk {Component ID} GLIST size is {Glist Size}, which is too high.
|
DISK_HAS_FAILED
|
DISK
|
HARDWARE
|
DISK
|
Disk {Component ID} Failed.
|
DISK_HIGH_MEDIA_ERROR_RATE_CLEARED
|
DISK
|
SOFTWARE
|
NONE
|
{Component ID} no longer exhibits high media error rate.
|
DISK_HIGH_MEDIA_ERROR_RATE_DETECTED
|
DISK
|
HARDWARE
|
DISK
|
{Component ID} exhibits high media error rate of rule {rule_type} per {cycle_type}.
|
DISK_HIGH_READ_CORRECTED_WITH_DELAY_RATE
|
DISK
|
HARDWARE
|
DISK
|
Disk {Component ID} has {number of read corrected with delay} read corrected errors with delay rate {rate}.
|
DISK_INFO_EXTRA_EVENT
|
DISK
|
HARDWARE
|
DISK
|
Disk {Component ID} Extra information event.
|
DISK_INFO_LOAD_FAILED
|
DISK
|
HARDWARE
|
DISK
|
{Component ID} failed.
|
DISK_IS_NOW_OFFLINE
|
DISK
|
HARDWARE
|
DISK
|
Disk {Component ID} is now offline. It has been taken offline by the SCSI mid-layer.
|
DISK_LARGER_THAN_SYSTEM_DISK_SIZE
|
DISK
|
HARDWARE
|
DISK
|
Disk {Component ID} has a size of {New size}GB which is larger than system disk size {System size}GB.
|
DISK_LOG_PAGE_READING_FAILED
|
DISK
|
HARDWARE
|
DISK
|
Disk {Component ID} Failed reading log page. Opcode is {opcode}, page code is {page code}.
|
DISK_LONG_LATENCY
|
DISK
|
HARDWARE
|
DISK
|
Long latencies on disk I/Os in the last 30 minutes on {Disk ID}, start LBA={Start LBA}, last LBA={Last LBA}, command={command}, latency={latency} ms.
|
DISK_MEDIA_PRE_SCAN_OFF
|
DISK
|
SOFTWARE
|
NONE
|
{Component ID} - Disk media pre scan is OFF.
|
DISK_MEDIA_PRE_SCAN_ON
|
DISK
|
SOFTWARE
|
NONE
|
{Component ID} - Disk media pre scan is ON.
|
DISK_MEDIUM_ERROR
|
DISK
|
HARDWARE
|
DISK
|
Media errors on {Disk ID}, start LBA={Start LBA}, last LBA={Last LBA}, latency={latency} ms.
|
DISK_NEEDS_PHASEOUT
|
DISK
|
HARDWARE
|
DISK
|
{Disk ID} needs to be phased out.
|
DISK_POWER_DOWN
|
DISK
|
HARDWARE
|
DISK
|
Disk {Component ID} was powered-down due to error recovery failures.
|
DISK_PROBLEMATIC_BEHAVIOR_CLEARED
|
DISK
|
SOFTWARE
|
NONE
|
{Component ID} no longer exhibits problematic behavior.
|
DISK_PROBLEMATIC_BEHAVIOR_DETECTED
|
DISK
|
HARDWARE
|
DISK
|
{Component ID} exhibits problematic behavior.
|
DISK_RECOVERED
|
DISK
|
SOFTWARE
|
NONE
|
Disk {Component ID} is functioning again.
|
DISK_REQUEST_ERROR_INFORMATION
|
DISK
|
SOFTWARE
|
NONE
|
Disk {Disk ID} had error: {Error Name}, latency={latency} ms.
|
DISK_RESET_DONE
|
DISK
|
SOFTWARE
|
NONE
|
Reset to disk {Component ID} was executed and succeeded. Reset duration {reset duration} usecs, IOs pending {IOs Pending}.
|
DISK_RESET_FAILED
|
DISK
|
HARDWARE
|
DISK
|
Reset to disk {Component ID} has failed. Reset duration {reset duration}, IOs pending {IOs Pending}.
|
DISK_RESET_FAILURE
|
DISK
|
HARDWARE
|
DISK
|
Reset to disk {Component ID} was executed and failed. Reset duration {reset duration} usecs, IOs pending {IOs Pending}.
|
DISK_RESET_SUCCEEDED
|
DISK
|
SOFTWARE
|
NONE
|
Reset to disk {Component ID} succeeded. Reset duration {reset duration}, IOs pending {IOs Pending}.
|
DISK_RESET_WAS_SENT
|
DISK
|
SOFTWARE
|
NONE
|
A disk reset was sent to {Component ID}.
|
DISK_RESPONSIVE
|
DISK
|
HARDWARE
|
NONE
|
Disk {Disk ID} is now responsive. Was unresponsive for {unresponsive_time} msecs, cache dirty level is {Dirty Level}%
|
DISK_SHOULD_FAIL
|
DISK
|
HARDWARE
|
DISK
|
{Disk ID} is malfunctioning and should fail.
|
DISK_SMALLER_THAN_SYSTEM_DISK_SIZE
|
DISK
|
HARDWARE
|
DISK
|
Disk {Component ID} has a size of {New size}GB which is smaller than system disk size {System size}GB.
|
DISK_SMART_READING_FAILED
|
DISK
|
HARDWARE
|
NONE
|
{Component ID} - SMART reading failed.
|
DISK_SMART_READING_OK
|
DISK
|
SOFTWARE
|
NONE
|
{Component ID} - SMART reading OK.
|
DISK_SMART_STATUS_BAD
|
DISK
|
HARDWARE
|
NONE
|
{Component ID} - SMART status: Bad.
|
DISK_SMART_STATUS_GOOD
|
DISK
|
SOFTWARE
|
NONE
|
{Component ID} - SMART status: Good.
|
DISK_STARTED_AUTO_PHASEIN
|
DISK
|
SOFTWARE
|
NONE
|
System started phasing in {Component ID} in order to ensure that data will not be unprotected. Phaseout of the containing service and module has been cancelled.
|
DISK_STARTED_AUTO_PHASEOUT
|
DISK
|
SOFTWARE
|
DISK
|
System started automatically phasing out {Component ID}.
|
DISK_STARTED_PHASEIN
|
DISK
|
USER
|
NONE
|
System started phasing in {Component ID}.
|
DISK_STARTED_PHASEOUT
|
DISK
|
USER / SOFTWARE
|
DISK
|
System started phasing out {Component ID}.
|
DISK_UNRESPONSIVE
|
DISK
|
HARDWARE
|
DISK
|
Disk {Disk ID} is unresponsive for {time} ms, cache dirty level is {Dirty Level}%
|
DISK_WAS_TURNED_OFF
|
DISK
|
SOFTWARE
|
DISK
|
Disk {Component ID} was turned off.
|
DISK_WAS_TURNED_ON
|
DISK
|
SOFTWARE
|
NONE
|
Disk {Component ID} was turned on.
|
DM_ACTIVATE
|
DATA MIGRATION
|
USER
|
NONE
|
Migration to Volume '{local volume name}' from Target '{target name}' was activated.
|
DM_CONNECTIVITY_TO_XIV_TARGET
|
DATA MIGRATION
|
USER
|
NONE
|
Gateway Node #{Node ID}: DM connection to {target name}:{target's connection index} was established, but is being ignored because the remote end is an XIV target configured for mirroring, rather than a host
|
DM_DEACTIVATE
|
DATA MIGRATION
|
USER
|
NONE
|
Migration to Volume '{local volume name}' from Target '{target name}' was deactivated.
|
DM_DEACTIVATE_LUN_UNAVAILABLE
|
DATA MIGRATION
|
USER
|
NONE
|
Migration to Volume '{local volume name}' from Target '{target name}' was deactivated since LUN is not available on one of the active paths to the target.
|
DM_DEFINE
|
DATA MIGRATION
|
USER
|
NONE
|
Data Migration was defined to Volume '{local volume name}' from Target '{target name}'.
|
DM_DELETE
|
DATA MIGRATION
|
USER
|
NONE
|
Definition of Data Migration to Volume '{local volume name}' from Target '{target name}' was deleted.
|
DM_SYNC_ENDED
|
DATA MIGRATION
|
USER
|
NONE
|
Migration to volume '{local volume name}' from target '{target name}' is complete.
|
DM_SYNC_ENDED_WITH_ERRORS
|
DATA MIGRATION
|
USER
|
NONE
|
Migration to volume '{local volume name}' from target '{target name}' has completed with {medium_errors_in_data_migration} error(s). Check previous events related to this volume for the list of affected LBAs.
|
DM_SYNC_STARTED
|
DATA MIGRATION
|
USER
|
NONE
|
Migration to volume '{local volume name}' from Target '{target name}' has started.
|
DOMAIN_CREATED
|
DOMAIN
|
USER
|
NONE
|
Domain {domain_name} has been created.
|
DOMAIN_DELETED
|
DOMAIN
|
USER
|
NONE
|
Domain {domain_name} has been deleted.
|
DOMAIN_MANAGED_ATTRIBUTE_SET
|
DOMAIN
|
USER
|
NONE
|
Domain {domain_name} managed attribute was set to {managed_attribute}.
|
DOMAIN_POLICY_SET
|
DOMAIN
|
USER
|
NONE
|
Domain policy for {Parameter Name} set to '{Parameter Value}'
|
DOMAIN_RENAMED
|
DOMAIN
|
USER
|
NONE
|
Domain {old_name} has been renamed to {domain_name}.
|
DOMAIN_UPDATED
|
DOMAIN
|
USER
|
NONE
|
Domain {domain_name} has been updated.
|
DOMAINS_AUTO_SHIFT_RESOURCES
|
DOMAIN
|
USER
|
NONE
|
Resources have been automatically shifted from domain {domain_name} to domain {domain_name}.
|
ELICENSE_ACCEPTED
|
ELICENSE
|
USER
|
NONE
|
Electronic license was accepted by '{Approver Name}'.
|
ELICENSE_VIOLATION
|
ELICENSE
|
USER
|
NONE
|
Latest version of the electronic license was not approved.
|
EMAIL_HAS_FAILED
|
PROACTIVE SUPPORT / ALERTS
|
ENVIRONMENT
|
Network / SMTPGW
|
Sending event {Event Code} ({Event Index}) to {Destination List} via {SMTP Gateway} failed. Module: {Module ID}; Error message: '{Error Message}'; timeout expired: {Timeout Expired?}.
|
EMAIL_NOT_SENT
|
PROACTIVE SUPPORT / ALERTS
|
ENVIRONMENT
|
Network / SMTPGW
|
Sending event {Event Code} ({Event Index}) to {Destination List} via {SMTP Gateway} was waived because of a failed SMTP gateway. It will not be used until {Retry Time}.
|
EMERGENCY_CONSOLE_ACCESS
|
SECURITY
|
USER
|
NONE
|
Emergency login to '{Unix Account Name}' account on module '{Component ID}' from tty '{TTY Device}'.
|
EMERGENCY_ROOT_ACCESS
|
SECURITY
|
USER
|
NONE
|
Emergency login to 'root' account on module '{Component ID}' from '{IP Address}' using key number '{Authorized Key Number}'.
|
EMERGENCY_SHUTDOWN_NOW
|
SYSTEM
|
USER
|
NONE
|
System is shutting down in emergency shutdown mode due to: {Emergency Shutdown Reason}.
|
GROUPED_POOL_CAPACITY_SHIFT
|
GROUPED POOL
|
USER
|
NONE
|
On Grouped Pool with name '{gp.name}', capacity of {capacity_size}GB was shifted from pool '{src_pool.name}' to pool '{dest_pool.name}'.
|
GROUPED_POOL_CREATE
|
GROUPED POOL
|
USER
|
NONE
|
Grouped Pool with name '{gp.name}' was created.
|
GROUPED_POOL_DELETE
|
GROUPED POOL
|
USER
|
NONE
|
Grouped Pool with name '{gp.name}' was deleted.
|
GROUPED_POOL_MOVED_BETWEEN_DOMAINS
|
GROUPED POOL
|
USER
|
NONE
|
Grouped Pool {gp_name} has been moved from domain {domain_name} to domain {domain_name}.
|
GROUPED_POOL_RENAME
|
GROUPED POOL
|
USER
|
NONE
|
Grouped Pool with name '{old_name}' was renamed '{gp.name}'.
|
HEARTBEAT_EMAIL_HAS_FAILED
|
PROACTIVE SUPPORT
|
ENVIRONMENT
|
Network / SMTPGW
|
Sending heartbeat to {Destination Name} via {SMTP Gateway} failed. Module: {Module ID}; Error message: '{Error Message}'; timeout expired: {Timeout Expired?}.
|
HEARTBEAT_SMS_HAS_FAILED
|
PROACTIVE SUPPORT
|
ENVIRONMENT
|
Network / SMSGW
|
Sending heartbeat to {Destination Name} via {SMS Gateway} and {SMTP Gateway} failed. Module: {Module ID}; Error message: '{Error Message}'; timeout expired: {Timeout Expired?}.
|
HOST_ADD_PORT
|
HOST
|
USER
|
NONE
|
Port of type {type} and ID '{port_name}' was added to Host with name '{host.name}'.
|
HOST_CONNECTED
|
HOST
|
HOSTS
|
NONE
|
Host '{host}' has connected to the system.
|
HOST_DEFINE
|
HOST
|
USER
|
NONE
|
Host of type {host.type} was defined with name '{host.name}'.
|
HOST_DEFINE_FAILED_TOO_MANY
|
HOST
|
USER
|
NONE
|
Host with name '{name}' could not be defined. You are attempting to define more hosts than the system permits.
|
HOST_DELETE
|
HOST
|
USER
|
NONE
|
Host with name '{host.name}' was deleted.
|
HOST_DISCONNECTED
|
HOST
|
HOSTS
|
iSCSI / Host
|
Host '{host}' has disconnected from the system.
|
HOST_MULTIPATH_OK
|
HOST
|
SOFTWARE
|
NONE
|
Host '{host}' has redundant connections to the system. #paths={npaths}
|
HOST_NO_MULTIPATH_ONLY_ONE_MODULE
|
HOST
|
HOSTS
|
iSCSI / Host
|
Host '{host}' is connected to the system through only one Interface module. #paths={npaths}
|
HOST_NO_MULTIPATH_ONLY_ONE_PORT
|
HOST
|
HOSTS
|
iSCSI / Host
|
Host '{host}' is connected to the system through only one of its ports. #paths={npaths}
|
HOST_REMOVE_PORT
|
HOST
|
USER
|
NONE
|
Port of type {type} and ID '{port_name}' was removed from Host with name '{host.name}'.
|
HOST_RENAME
|
HOST
|
USER
|
NONE
|
Host with name '{old_name}' was renamed '{host.name}'.
|
HOST_UPDATE
|
HOST
|
USER
|
NONE
|
Host named '{host.name}' was updated.
|
HOT_UPGRADE_ABORTED
|
UPGRADE
|
SOFTWARE
|
Contact IBM
|
Hot upgrade aborted with reason {reason}.
|
HOT_UPGRADE_HAS_FAILED
|
UPGRADE
|
SOFTWARE
|
Contact IBM
|
Hot upgrade failed while {errorneous_state}.
|
HSA_WRONG_IQN
|
HOST
|
HOSTS
|
Contact IBM
|
This event is generated when a wrong IQN (iSCSI Qualified Name) is given as input to an hsa_* (host side accelerator) command.
|
HTTPS_HAS_FAILED
|
ALERTS
|
ENVIRONMENT
|
Network / HTTP server
|
Sending event {Event Code} ({Event Index}) to {Destination List} via {HTTPS address} failed. Module: {Module ID}; Error message: '{Error Message}' ({HTTP error code}); timeout expired: {Timeout Expired?}.
|
INTERCONNECT_LOSS_RATE_IS_BACK_TO_NORMAL
|
INTERCONNECT
|
ENVIRONMENT
|
NONE
|
Packet Loss rate between a pair of modules is below threshold for the last 60x5 consecutive measurements (300 seconds).
|
INTERCONNECT_LOSS_RATE_IS_HIGH
|
INTERCONNECT
|
ENVIRONMENT
|
Interconnect Network
|
Packet Loss rate between a pair of modules is above threshold for the last 60 consecutive measurements (60 seconds).
|
INTERCONNECT_MTU_SIZE_IS_OK
|
INTERCONNECT
|
ENVIRONMENT
|
NONE
|
This event is generated if: 1. the interconnect traffic MTU size along all paths between the cluster modules is the same as or greater than the iSCSI traffic MTU size between the hosts and the module; 2. MTU discovery has found that the iSCSI MTU size is the same as or greater than INTERCONNECT_MTU_SIZE.
|
INTERCONNECT_MTU_SIZE_IS_SMALL
|
INTERCONNECT
|
ENVIRONMENT
|
Interconnect MTU
|
This event is generated if: 1. the interconnect traffic MTU size along all paths between modules is smaller than the iSCSI traffic MTU size between the hosts and the module; 2. MTU discovery has found that the iSCSI MTU size is below INTERCONNECT_MTU_SIZE.
|
INTERCONNECT_RTT_IS_BACK_TO_NORMAL
|
INTERCONNECT
|
ENVIRONMENT
|
NONE
|
Network Round Trip Time measurement between a pair of modules is below threshold for the last 60x5 consecutive measurements (300 seconds).
|
INTERCONNECT_RTT_IS_HIGH
|
INTERCONNECT
|
ENVIRONMENT
|
Interconnect Network
|
Network Round Trip Time measurement between a pair of modules is above threshold for the last 60 consecutive measurements (60 seconds).
|
INTERFACE_DISCONNECTED_FROM_TARGET
|
TARGET
|
ENVIRONMENT
|
Target Connectivity
|
Interface node on module {module} cannot access target '{target}' through any gateway module.
|
INTERFACE_RECONNECTED_TO_TARGET
|
TARGET
|
ENVIRONMENT
|
NONE
|
Interface node on module {module} can access target '{target}'.
|
INTERFACE_SERVICES_ACTIVATED
|
TARGET
|
SOFTWARE
|
NONE
|
Interface services of {Module ID} were activated.
|
IO_PAUSED_FOR_CONS_GROUP
|
XCG
|
USER
|
NONE
|
Pause IO on CG with name '{cg_name}' was started with {timeout}ms timeout. Token is '{token}'.
|
IO_RESUMED_FOR_CONS_GROUP_AUTOMATICALLY
|
XCG
|
SOFTWARE
|
NONE
|
Pause IO on CG with name '{cg_name}' and token '{token}' was resumed after snapgroup creation.
|
IO_RESUMED_FOR_CONS_GROUP_EXPLICITLY
|
XCG
|
USER
|
NONE
|
Pause IO on CG with name '{cg_name}' and token '{token}' was resumed by user request.
|
IO_RESUMED_FOR_CONS_GROUP_UPON_SYSTEM_ERROR
|
XCG
|
SOFTWARE
|
Verify XCG
|
Pause IO on CG with name '{cg_name}' and token '{token}' was resumed after system error.
|
IO_RESUMED_FOR_CONS_GROUP_UPON_TIMEOUT_EXPIRATION
|
XCG
|
SOFTWARE
|
Verify XCG
|
Pause IO on CG with name '{cg_name}' and token '{token}' was canceled after timeout.
|
IOS_RESTORED_AFTER_HOT_UPGRADE
|
HOT UPGRADE
|
SOFTWARE
|
NONE
|
System is able to perform I/Os after a hot upgrade.
|
IP_ACCESS_CANNOT_RESOLVE_ADDRESS
|
IP INTERFACE
|
ENVIRONMENT
|
Network / IP ports
|
Cannot resolve address '{address}' added to the IP access group {IP access group name}.
|
IP_ACCESS_FAILED_SETTING_RULES
|
IP INTERFACE
|
ENVIRONMENT
|
Network / IP ports
|
Failed setting IP access rules.
|
IPINTERFACE_ADD_PORT
|
IP INTERFACE
|
USER
|
NONE
|
Port #{port index} was added to ISCSI IP Interface with name '{Interface name}'
|
IPINTERFACE_CREATE
|
IP INTERFACE
|
USER
|
NONE
|
A new iscsi IP Interface was defined with name '{Interface name}' on module {module} with ports '{port list}' and IP address {IP address}
|
IPINTERFACE_DELETE
|
IP INTERFACE
|
USER
|
NONE
|
ISCSI IP Interface with name '{Interface name}' was deleted
|
IPINTERFACE_REMOVE_PORT
|
IP INTERFACE
|
USER
|
NONE
|
Port #{port index} was removed from ISCSI IP Interface with name '{Interface name}'
|
IPINTERFACE_RENAME
|
IP INTERFACE
|
USER
|
NONE
|
ISCSI IP Interface with name '{old name}' was renamed '{Interface name}'
|
IPINTERFACE_UPDATE
|
IP INTERFACE
|
USER
|
NONE
|
ISCSI IP Interface with name '{Interface name}' was updated. Its IP address is {IP address}
|
IPINTERFACE_UPDATE_INTERCONNECT
|
IP INTERFACE
|
USER
|
NONE
|
The event is generated when the user changes the MTU of the interconnect network.
|
IPINTERFACE_UPDATE_MANAGEMENT
|
IP INTERFACE
|
USER
|
NONE
|
Management IP Interfaces were updated. Management IPs are {IP addresses}
|
IPINTERFACE_UPDATE_MANAGEMENT_IPV6
|
IP INTERFACE
|
USER
|
NONE
|
Management IP Interfaces were updated. Management IPv6 addresses are {IPv6 addresses}
|
IPINTERFACE_UPDATE_VPN
|
IP INTERFACE
|
USER
|
NONE
|
VPN IP Interfaces were updated. VPN IPs are {IP addresses}
|
IPINTERFACE_UPDATE_VPN_IPV6
|
IP INTERFACE
|
USER
|
NONE
|
VPN IPv6 Interfaces were updated. VPN IPv6 addresses are {IP addresses}
|
IPSEC_CONNECTION_ADDED
|
SECURITY
|
USER
|
NONE
|
A new IPSec connection named '{name}' was added
|
IPSEC_CONNECTION_REMOVED
|
SECURITY
|
USER
|
NONE
|
The IPSec connection named '{name}' was removed
|
IPSEC_CONNECTION_UPDATED
|
SECURITY
|
USER
|
NONE
|
The IPSec connection named '{name}' was updated
|
IPSEC_DISABLED
|
SECURITY
|
USER
|
NONE
|
IPSec was disabled
|
IPSEC_ENABLED
|
SECURITY
|
USER
|
NONE
|
IPSec was enabled
|
IPSEC_TUNNEL_CLOSED
|
SECURITY
|
USER
|
NONE
|
The IPSec tunnel named '{name}' between module {Module} and {Right IP} was closed
|
IPSEC_TUNNEL_OPENED
|
SECURITY
|
USER
|
NONE
|
The IPSec tunnel named '{name}' between module {Module} and {Right IP} was opened
|
ISCSI_PORT_HAS_FAILED
|
HOST
|
HARDWARE / ENVIRONMENT
|
iSCSI port
|
ISCSI port service {port} has failed due to {code}{codestr} (attempt number {Number of retries})
|
ISCSI_PORT_RESTART
|
HOST
|
SOFTWARE
|
NONE
|
ISCSI port service {port} was restarted due to {code}{codestr}
|
LDAP_AUTHENTICATION_ACTIVATED
|
LDAP
|
USER
|
NONE
|
LDAP authentication activated.
|
LDAP_AUTHENTICATION_DEACTIVATED
|
LDAP
|
USER
|
NONE
|
LDAP authentication deactivated.
|
LDAP_CONFIGURATION_CHANGED
|
LDAP
|
USER
|
NONE
|
LDAP configuration has changed.
|
LDAP_CONFIGURATION_RESET
|
LDAP
|
USER
|
NONE
|
LDAP configuration has reset.
|
LDAP_SERVER_ACCESSIBLE
|
LDAP
|
ENVIRONMENT
|
NONE
|
LDAP server {FQDN} is now accessible.
|
LDAP_SERVER_INACCESSIBLE
|
LDAP
|
ENVIRONMENT
|
LDAP Server / network
|
LDAP server {FQDN} is inaccessible.
|
LDAP_SERVER_WAS_ADDED
|
LDAP
|
USER
|
NONE
|
LDAP server '{Server FQDN}' was added to the system.
|
LDAP_SERVER_WAS_REMOVED
|
LDAP
|
USER
|
NONE
|
LDAP server '{Server FQDN}' was removed from the system.
|
LDAP_SSL_CERTIFICATE_ABOUT_TO_EXPIRE
|
LDAP
|
SOFTWARE
|
Renew SSL certificate
|
SSL Certificate of LDAP server '{Server FQDN}' is about to expire on {Expiration Date} ({Counter} notification).
|
MAP_PROXY_VOLUME
|
HOST
|
USER
|
NONE
|
IBM Hyper-Scale Mobility Volume with name '{name}' was mapped to LUN '{LUN}' for {host_or_cluster} with name '{host}'.
|
MAP_VOLUME
|
HOST
|
USER
|
NONE
|
Volume with name '{volume.name}' was mapped to LUN '{LUN}' for {host_or_cluster} with name '{host}'.
|
MEDIUM_ERROR_IN_DATA_MIGRATION
|
MEDIUM ERROR
|
ENVIRONMENT
|
Migration source
|
Medium error in data migration into volume '{Volume Name}' at LBA {LBA} for {Length} blocks.
|
MEDIUM_ERROR_NOT_RECOVERED
|
MEDIUM ERROR
|
HARDWARE
|
Monitor if reoccurs
|
Medium error on volume={Volume}, logical-partition={Logical Partition Number}, offsetted-logical-partition={Offsetted Logical Partition Number} could not be recovered due to {Reason}.
|
MEDIUM_ERROR_RECOVERED
|
MEDIUM ERROR
|
HARDWARE
|
NONE
|
Medium error on volume={Volume}, logical-partition={Logical Partition Number}, offsetted-logical-partition={Offsetted Logical Partition Number} was recovered.
|
MEMORY_COMMITMENT_OK
|
SOFTWARE NODE
|
SOFTWARE
|
NONE
|
{module} is {difference} KB below memory commit limit - returned to a safe margin.
|
MEMORY_ECC_ERRORS_DETECTED
|
SOFTWARE NODE
|
HARDWARE
|
DIMM
|
Memory ECC errors were detected on {Module}.
|
METADATA_DELETE
|
INTEGRATION
|
USER
|
NONE
|
Metadata object deleted for {Object type} with name '{Object name}'.
|
METADATA_SERVICE_DB_CREATE
|
INTEGRATION
|
USER
|
NONE
|
Database {DB} was created
|
METADATA_SERVICE_DB_DELETE
|
INTEGRATION
|
USER
|
NONE
|
Database {DB} was deleted
|
METADATA_SERVICE_ENABLE
|
INTEGRATION
|
USER
|
NONE
|
Metadata service is now enabled
|
METADATA_SET
|
INTEGRATION
|
USER
|
NONE
|
{Object type} with name '{Object name}' has new metadata value.
|
MIRROR_ACTIVATE
|
MIRROR
|
USER
|
NONE
|
The Remote Mirror of peer '{local peer name}' on Target '{target name}' was activated.
|
MIRROR_AUTO_FIX_REACHED_LIMIT
|
MIRROR
|
SOFTWARE
|
Contact IBM
|
A remote checksum diff for mirror '{local peer name}' cannot be fixed automatically because the auto fix limit was reached.
|
MIRROR_CANNOT_CREATE_LRS_TOO_MANY_VOLUMES
|
MIRROR
|
SOFTWARE
|
Verify volume space
|
Synchronization of remote mirror of peer '{local peer name}' on target '{target name}' cannot be completed; insufficient volume available for this operation.
|
MIRROR_CANNOT_CREATE_SYNC_JOB_TOO_MANY_VOLUMES
|
MIRROR
|
SOFTWARE
|
Verify volume space
|
Synchronization of remote mirror of peer '{local peer name}' on target '{target name}' cannot be completed; insufficient volume available for this operation.
|
MIRROR_CHANGE_DESIGNATION
|
MIRROR
|
USER
|
NONE
|
Local peer '{local peer name}' switched its designated role with peer on Target '{target name}'. It is now {designation}.
|
MIRROR_CHANGE_RPO
|
MIRROR
|
USER
|
NONE
|
RPO of Mirror of local peer '{local peer name}' is now {RPO}.
|
MIRROR_CONS_GROUP_SNAPSHOTS_CREATE
|
MIRROR
|
USER
|
NONE
|
Mirrored Snapshot Group for Consistency Group with name '{cg.name}' was created with name '{cs_name}'.
|
MIRROR_CONS_GROUP_SNAPSHOTS_OVERWRITE
|
MIRROR
|
USER
|
NONE
|
Mirrored Snapshot Group named '{cs_name}' was overridden for Consistency Group with name '{cg.name}'.
|
MIRROR_CREATE
|
MIRROR
|
USER
|
NONE
|
A remote mirror was defined for Volume '{local volume name}' on Target '{target name}'. Remote Volume is '{remote volume name}'.
|
MIRROR_CREATE_FAILED_TARGET_NOT_CONNECTED
|
MIRROR
|
ENVIRONMENT
|
Target Connectivity
|
Target could not be reached. Target with name '{target.name}' is currently not connected.
|
MIRROR_CREATE_SLAVE
|
MIRROR
|
USER
|
NONE
|
A remote mirror was defined by Target '{target name}' for Volume '{local volume name}'. Remote Volume is '{remote volume name}'.
|
MIRROR_DEACTIVATE
|
MIRROR
|
USER
|
NONE
|
The Remote Mirror of peer '{local peer name}' on Target '{target name}' was deactivated.
|
MIRROR_DEACTIVATE_CONFIGURATION_ERROR
|
MIRROR
|
USER
|
NONE
|
The Remote Mirror of peer '{local peer name}' on Target '{target name}' was deactivated since the Mirror configuration on the slave machine has changed.
|
MIRROR_DEACTIVATE_SECONDARY_LOCKED
|
MIRROR
|
USER
|
NONE
|
The Remote Mirror of peer '{local peer name}' on Target '{target name}' was deactivated since the Pool on the secondary machine was locked.
|
MIRROR_DELETE
|
MIRROR
|
USER
|
NONE
|
The Remote Mirror relation of peer '{local peer name}' to a peer on Target '{target name}' was deleted.
|
MIRROR_END_SYNC_FAILED_CONFIGURATION_ERROR
|
MIRROR
|
SOFTWARE
|
Verify configuration
|
Configuration of remote mirror of peer '{local peer name}' on target '{target name}' does not match local configuration.
|
MIRROR_INCOMPATIBLE_VERSION_FOR_UNMAP_SUPPORT
|
MIRROR
|
SOFTWARE
|
Check Compatibility
|
Mirror of peer '{local peer name}' on target '{target name}' cannot support unmap, remote machine has incompatible version.
|
MIRROR_IS_LAGGING_BEYOND_ABSOLUTE_THRESHOLD
|
MIRROR
|
SOFTWARE
|
Link Bandwidth
|
Last Replication Time of Mirror of local peer '{local peer name}' is {Last Replication Time}.
|
MIRROR_IS_LAGGING_BEYOND_PERCENT_THRESHOLD
|
MIRROR
|
SOFTWARE
|
Link Bandwidth
|
Last Replication Time of Mirror of local peer '{local peer name}' is {Last Replication Time}.
|
MIRROR_REESTABLISH_FAILED_CONFIGURATION_ERROR
|
MIRROR
|
SOFTWARE
|
Verify configuration
|
Mirror reestablish failed. Configuration of remote mirror of peer '{local peer name}' on target '{target name}' does not match local configuration.
|
MIRROR_REESTABLISH_FAILED_TOO_MANY_VOLUMES
|
MIRROR
|
SOFTWARE
|
Verify configuration
|
Last Consistent Snapshot of Slave peer '{local peer name}' could not be created. The maximal number of Volumes is already defined.
|
MIRROR_RESYNC_FAILED
|
MIRROR
|
SOFTWARE
|
Verify configuration
|
Synchronization of meta data with mirror failed. Configuration of remote mirror of volume '{local volume name}' on target '{target name}' does not match local configuration.
|
MIRROR_RESYNC_FAILED_DUE_TO_THIN_PROVISIONING
|
MIRROR
|
SOFTWARE
|
Target capacity
|
Synchronization of bitmaps with mirror failed. Not enough hard capacity left in Pool of volume '{mirror.local_volume_name}'.
|
MIRROR_REVERSE_ROLE_OF_PEER_WITH_LCS_TO_MASTER
|
MIRROR
|
USER
|
NONE
|
Local peer '{local peer name}' is now Master of a peer on Target '{target name}'. The external last consistent snapshot should be deleted manually.
|
MIRROR_REVERSE_ROLE_TO_MASTER
|
MIRROR
|
USER
|
NONE
|
Local peer '{local peer name}' is now Master of a peer on Target '{target name}'.
|
MIRROR_REVERSE_ROLE_TO_SLAVE
|
MIRROR
|
USER
|
Link Bandwidth
|
Local peer '{local peer name}' is now Slave of a peer on Target '{target name}'.
|
MIRROR_RPO_LAGGING
|
MIRROR
|
ENVIRONMENT
|
Link Bandwidth
|
Mirror of local peer '{local peer name}' is now behind its specified RPO.
|
MIRROR_RPO_OK
|
MIRROR
|
ENVIRONMENT
|
NONE
|
Mirror of local peer '{local peer name}' is now ahead of its specified RPO.
|
MIRROR_SCHEDULE_CHANGE
|
MIRROR
|
USER
|
NONE
|
Schedule of remote mirror of '{local peer name}' is now '{schedule name}'.
|
MIRROR_SLAVE_SNAPSHOT_CREATE
|
MIRROR
|
USER
|
NONE
|
Mirrored Snapshot named '{snapshot.name}' was created for volume named '{volume.name}'.
|
MIRROR_SLAVE_SNAPSHOT_OVERWRITE
|
MIRROR
|
USER
|
NONE
|
Mirrored Snapshot named '{snapshot.name}' was overridden for volume named '{volume.name}'.
|
MIRROR_SNAPGROUP_CREATE_FAILED
|
MIRROR
|
SOFTWARE
|
Verify target
|
Remote snapshot group named '{snapshot group name}' was not created successfully. Error code is '{error}'
|
MIRROR_SNAPSHOT_CREATE
|
MIRROR
|
USER
|
NONE
|
Mirrored Snapshot named '{snapshot.name}' was created for volume named '{volume.name}'.
|
MIRROR_SNAPSHOT_CREATE_FAILED
|
MIRROR
|
SOFTWARE
|
Verify target
|
Remote snapshot named '{snapshot name}' was not created successfully. Error code is '{error}'
|
MIRROR_SNAPSHOT_OVERWRITE
|
MIRROR
|
USER
|
NONE
|
Mirrored Snapshot named '{snapshot.name}' was overridden for volume named '{volume.name}'.
|
MIRROR_SWITCH_ROLES_TO_MASTER
|
MIRROR
|
USER
|
NONE
|
Local peer '{local peer name}' switched roles with peer on Target '{target name}'. It is now Master.
|
MIRROR_SWITCH_ROLES_TO_SLAVE
|
MIRROR
|
USER
|
NONE
|
Local peer '{local peer name}' switched roles with peer on Target '{target name}'. It is now Slave.
|
MIRROR_SYNC_ENDED
|
MIRROR
|
SOFTWARE
|
NONE
|
Synchronization of remote mirror of peer '{local peer name}' on target '{target name}' has ended.
|
MIRROR_SYNC_STARTED
|
MIRROR
|
SOFTWARE
|
NONE
|
Synchronization of remote mirror of volume '{local volume name}' on Target '{target name}' has started.
|
MIRROR_SYNCHRONIZATION_TYPE_CHANGED
|
MIRROR
|
USER
|
NONE
|
Synchronization of Mirror of peer '{local peer name}' is now '{mirror synchronization type}'.
|
MIRRORING_CONNECTIVITY_TO_NON_XIV_TARGET
|
MIRROR
|
ENVIRONMENT
|
Verify target
|
Gateway Node #{Node ID}: mirroring connection to {target name}:{target's connection index} was established, but is being ignored because the remote end is not an XIV target or is not properly configured
|
MISMATCH_IN_INTERFACE_SPEED
|
NETWORK
|
ENVIRONMENT
|
Component speed setting
|
Interface speed on {Component ID} is {actual speed}G; the expected speed is {req speed}G.
|
MODULE_CHANGE_DETECTED
|
MODULE
|
HARDWARE
|
NONE
|
{Component ID} has been changed from a serial of {old_serial} to {new_serial}.
|
MODULE_COMPONENT_TEST_STARTED
|
MODULE
|
USER
|
NONE
|
Test of {Component ID} started.
|
MODULE_CPU_HAS_LESS_CORES_THAN_EXPECTED
|
MODULE
|
HARDWARE
|
Module
|
CPU of {Component ID} has fewer cores than expected: got {actual cores}, expected {req cores}.
|
MODULE_CPU_HAS_MORE_CORES_THAN_EXPECTED
|
MODULE
|
HARDWARE
|
Module
|
CPU of {Component ID} has more cores than expected: got {actual cores} cores, expected only {req cores}.
|
MODULE_DOWNLOAD_FAILED
|
MODULE
|
HARDWARE
|
Module / Contact IBM
|
Failure occurred trying to download current version of the system to module {Module ID}, failure reason: {Reason}.
|
MODULE_DOWNLOAD_TIMEOUT
|
MODULE
|
HARDWARE
|
Module / Contact IBM
|
Timeout expired trying to download current version of the system to module {Module ID} using Interface {Interface}.
|
MODULE_DOWNLOAD_VERSION_TIMEOUT
|
MODULE
|
HARDWARE
|
Module / Contact IBM
|
Timeout expired trying to download current version of the system to module {Module ID}.
|
MODULE_FAILED
|
MODULE
|
HARDWARE
|
Module / Contact IBM
|
{Component ID} failed.
|
MODULE_FAILED_COULD_NOT_BE_POWERED_OFF
|
MODULE
|
HARDWARE
|
Module
|
The failed module {Failed module} could not be powered off.
|
MODULE_FAILED_SHOULD_BE_POWERED_OFF
|
MODULE
|
HARDWARE
|
Contact IBM
|
The failed module {Failed module} should be powered off based upon {Log String}.
|
MODULE_FAILED_TO_FETCH_PATCH_SCRIPT
|
MODULE
|
SOFTWARE
|
Contact IBM
|
Module {Module} failed to fetch patch script {Patch Name}.
|
MODULE_FAILED_WAS_NOT_POWERED_OFF
|
MODULE
|
HARDWARE
|
Module
|
The failed module {Failed module} has not been powered off as a failsafe due to {Failed IPMI module} not having IPMI set.
|
MODULE_FAILED_WAS_POWERED_OFF
|
MODULE
|
HARDWARE
|
Module
|
The failed module {Failed module} has been powered off.
|
MODULE_FINISHED_PHASEOUT
|
MODULE
|
SOFTWARE
|
NONE
|
System finished phasing out {Component ID}.
|
MODULE_HAS_ACQUIRED_DHCP_ADDRESS
|
MODULE
|
SOFTWARE
|
NONE
|
Module {Module ID} acquired DHCP address as part of the module equip process
|
MODULE_HAS_MORE_MEMORY_THAN_EXPECTED
|
MODULE
|
HARDWARE
|
Module
|
{Component ID} has more memory than expected. Actual memory size is {actual_mem} GB; it should be {req_mem} GB.
|
MODULE_IS_MISSING_DATA_DISKS
|
MODULE
|
HARDWARE
|
Module
|
{Module ID} has {Num Found} of {Num Expected} data disks.
|
MODULE_IS_MISSING_MEMORY
|
MODULE
|
HARDWARE
|
Module
|
{Component ID} is missing memory. Actual memory size is {actual_mem} GB but should be {req_mem} GB.
|
MODULE_IS_MISSING_REQUIRED_MEMORY
|
MODULE
|
HARDWARE
|
Module
|
{Component ID} has less memory ({actual_mem} GB) than is defined for use ({req_mem} GB).
|
MODULE_IS_NOT_UP
|
MODULE
|
HARDWARE
|
Module
|
{Module Component ID} is not up.
|
MODULE_NO_IP_CONNECTIVITY
|
MODULE
|
HARDWARE
|
Module
|
There is no IP connectivity to failed {Component Id}.
|
MODULE_PHASEOUT_FAILURE_REASON
|
MODULE
|
SOFTWARE
|
Contact IBM
|
System could not phaseout {Component ID} due to lack of nodes of type {Node Type}.
|
MODULE_STARTED_PHASEOUT
|
MODULE
|
USER
|
NONE
|
System started phasing out {Component ID}.
|
MODULE_STOPPED_PHASEOUT_DUE_TO_MANAGEMENT_REQUIREMENT
|
MODULE
|
SOFTWARE
|
Contact IBM
|
System stopped phasing out {Component ID} due to management requirement.
|
MODULE_TEMPERATURE_INCONSISTENT_WITH_OTHERS
|
MODULE
|
HARDWARE
|
Module
|
{Component ID} external temperature is not consistent with other modules.
|
NETWORK_LINK_FLOW_CONTROL_OFF
|
NETWORK
|
ENVIRONMENT
|
Network Link
|
Network Interface {Interface Role} #{Interface Role Index} on {Component ID} has flow control turned off.
|
NETWORK_LINK_FLOW_CONTROL_ON
|
NETWORK
|
ENVIRONMENT
|
NONE
|
Network Interface {Interface Role} #{Interface Role Index} on {Component ID} has flow control turned on.
|
NETWORK_LINK_HAS_DATA
|
NETWORK
|
ENVIRONMENT
|
Network Link
|
Network Interface {Interface Role} #{Interface Index} on {Component ID} - link has data flowing through again.
|
NETWORK_LINK_IS_NOW_DOWN
|
NETWORK
|
ENVIRONMENT
|
Network Link
|
Network Interface {Interface Role} #{Interface Index} on {Component ID} - link disconnected.
|
NETWORK_LINK_IS_NOW_UP
|
NETWORK
|
ENVIRONMENT
|
NONE
|
Network Interface {Interface Role} #{Interface Index} on {Component ID} - link regained.
|
NETWORK_LINK_NO_DATA
|
NETWORK
|
ENVIRONMENT
|
Network Link
|
Network Interface {Interface Role} #{Interface Index} on {Component ID} - link has no data flowing through for the last {Time Not flowing} seconds.
|
NETWORK_LINK_NO_DATA_LONG
|
NETWORK
|
ENVIRONMENT
|
Network Link
|
Network Interface {Interface Role} #{Interface Index} on {Component ID} - link has no data flowing through for the last {Time Not flowing} seconds.
|
NETWORK_LINK_PARTIAL_LOSS
|
NETWORK
|
ENVIRONMENT
|
Network Link
|
Network Interface {Interface Role} #{Interface Role Index} on {Component ID} has partial packet loss at a rate of {Packet Error Rate}.
|
NETWORK_LINK_RETURNED_TO_NORMAL
|
NETWORK
|
ENVIRONMENT
|
NONE
|
Network Interface {Interface Role} #{Interface Role Index} on {Component ID} no longer has partial packet loss.
|
NETWORK_LINK_WAS_RESET_CONSECUTIVELY
|
NETWORK
|
ENVIRONMENT
|
Network Link
|
Network Interface {Interface Role} #{Interface Index} on {Component ID} - link was reset consecutively.
|
NEW_TIME_CHANGE_IS_INVALID
|
SYSTEM
|
USER
|
Check new time
|
Setting time to {Seconds} seconds and {USecs} microseconds on module {Module} is invalid and was denied.
|
NIC_FAILED
|
NETWORK
|
HARDWARE
|
NIC
|
{Component ID} has failed. Hardware status: {Status}.
|
NTP_SERVER_TIME_DIFFERENCE_TOO_BIG
|
SYSTEM
|
ENVIRONMENT
|
NTP Server
|
NTP server {NTP Server} sent a transaction with time difference of {Delta} seconds, which exceeds the maximal difference of {Max Allowed} seconds. The transaction will be ignored; please check the NTP server's and the system's times.
|
OBJECT_ATTACHED_TO_DOMAIN
|
DOMAIN
|
USER
|
NONE
|
Object {object_name} of type {object_type} has been added to domain {domain_name}.
|
OBJECT_REMOVED_FROM_DOMAIN
|
DOMAIN
|
USER
|
NONE
|
Object {object_name} of type {object_type} has been removed from domain {domain_name}.
|
OPTIMIZING_DATA_REDIST_STARTED
|
SYSTEM
|
SOFTWARE
|
NONE
|
Starting optimizing data transfer to new disks.
|
PATCH_SCRIPT_ADDED
|
PATCH SCRIPT
|
USER
|
NONE
|
Added patch {Patch Name}.
|
PATCH_SCRIPT_DELETED
|
PATCH SCRIPT
|
USER
|
NONE
|
Deleted patch {Patch Name}.
|
PATCH_SCRIPT_EXECUTION_ENDED
|
PATCH SCRIPT
|
SOFTWARE
|
NONE
|
Patch script {Patch Name} execution on module {Module} with pid {Process ID} ended with return code {Return Code}
|
PATCH_SCRIPT_EXECUTION_STARTED
|
PATCH SCRIPT
|
SOFTWARE
|
NONE
|
Patch script {Patch Name} execution on module {Module} started with pid {Process ID}
|
PATCH_SCRIPT_FAILED_TO_EXECUTE
|
PATCH SCRIPT
|
SOFTWARE
|
Contact IBM
|
Patch script {Patch Name} execution failed on module {Module}
|
PATCH_SCRIPT_UPDATED
|
PATCH SCRIPT
|
USER
|
NONE
|
Updated patch {Patch Name}.
|
PERF_CLASS_ADD_DOMAIN
|
PERFORMANCE CLASS
|
USER
|
NONE
|
Domain {domain_name} was added to Performance Class {name}
|
PERF_CLASS_ADD_HOST
|
PERFORMANCE CLASS
|
USER
|
NONE
|
Host with name '{host_name}' was added to Performance Class with name '{name}'
|
PERF_CLASS_ADD_POOL
|
PERFORMANCE CLASS
|
USER
|
NONE
|
Pool with name '{pool.name}' was added to Performance Class with name '{pool.perf_class}'
|
PERF_CLASS_CREATE
|
PERFORMANCE CLASS
|
USER
|
NONE
|
Performance Class with name '{name}' was created
|
PERF_CLASS_DELETE
|
PERFORMANCE CLASS
|
USER
|
NONE
|
Performance Class with name '{name}' was deleted
|
PERF_CLASS_MAX_BW_RATE_UPDATED
|
PERFORMANCE CLASS
|
USER
|
NONE
|
Performance Class {name} max BW rate was changed to {BW rate}
|
PERF_CLASS_MAX_IO_RATE_UPDATED
|
PERFORMANCE CLASS
|
USER
|
NONE
|
Performance Class {name} max IO rate was changed to {IO rate}
|
PERF_CLASS_RATE_AT_LIMIT
|
PERFORMANCE CLASS
|
SOFTWARE
|
Host / Module
|
Performance class '{perf_class}' on {Module Id} reached its limit of {Limit} {Limit Name}, IOs being throttled.
|
PERF_CLASS_REMOVE_DOMAIN
|
PERFORMANCE CLASS
|
USER
|
NONE
|
Domain {domain_name} was removed from Performance Class {name}
|
PERF_CLASS_REMOVE_HOST
|
PERFORMANCE CLASS
|
USER
|
NONE
|
Host with name '{host_name}' was removed from Performance Class with name '{name}'
|
PERF_CLASS_REMOVE_POOL
|
PERFORMANCE CLASS
|
USER
|
NONE
|
Pool with name '{pool.name}' was removed from Performance Class with name '{name}'
|
PERF_CLASS_RESOURCE_EXHAUSTION
|
PERFORMANCE CLASS
|
SOFTWARE
|
Host / Module
|
Exhausted all allowed resources for performance classes on {Module Id}, BUSY until resources available.
|
PERF_CLASS_RESOURE_EXHAUSTION
|
PERFORMANCE CLASS
|
SOFTWARE
|
Host / Module
|
Exhausted all allowed resources for performance classes on {Module Id}, BUSY until resources available.
|
PKCS12_CERTIFICATE_ADDED
|
SECURITY
|
USER
|
NONE
|
A new PKCS#12 named '{name}' with fingerprint '{fingerprint}' was added.
|
PKI_RENAME
|
SECURITY
|
USER
|
NONE
|
PKI with the name '{old name}' was renamed to '{new name}'
|
PKI_UPDATED
|
SECURITY
|
USER
|
NONE
|
PKI with the name '{name}' and fingerprint '{fingerprint}' was updated
|
POOL_ADDED_TO_DOMAIN
|
POOL
|
USER
|
NONE
|
Pool {pool_name} has been added to domain {domain_name}.
|
POOL_CHANGE_LOCK_BEHAVIOR
|
POOL
|
USER
|
NONE
|
Lock Behavior of Storage Pool with name '{pool.name}' is now '{state}'.
|
POOL_CHANGE_PERF_CLASS
|
POOL
|
USER
|
NONE
|
Performance Class of Storage Pool with name '{pool.name}' is now '{pool.perf_class}'.
|
POOL_CONFIG_SNAPSHOTS
|
POOL
|
USER
|
NONE
|
Management policy of Mirroring snapshots of Storage Pool with name '{pool.name}' has changed.
|
POOL_CREATE
|
POOL
|
USER
|
NONE
|
Storage Pool of size {pool.size}GB was created with name '{pool.name}'.
|
POOL_CREATE_FAILED_TOO_MANY
|
POOL
|
SOFTWARE
|
Pool count
|
Storage Pool with name '{name}' could not be created. You are attempting to add more Storage Pools than the system permits.
|
POOL_CREATE_THIN
|
POOL
|
USER
|
NONE
|
Storage Pool of soft size {pool.soft_size}GB and hard size {pool.hard_size}GB was created with name '{pool.name}'.
|
POOL_DELETE
|
POOL
|
USER
|
NONE
|
Storage Pool with name '{pool.name}' was deleted.
|
POOL_MOVED_BETWEEN_DOMAINS
|
POOL
|
USER
|
NONE
|
Pool {pool_name} has been moved from domain {domain_name} to domain {domain_name}.
|
POOL_REMOVED_FROM_DOMAIN
|
POOL
|
USER
|
NONE
|
Pool {pool_name} has been removed from domain {domain_name}.
|
POOL_RENAME
|
POOL
|
USER
|
NONE
|
Storage Pool with name '{old_name}' was renamed '{pool.name}'.
|
POOL_RESIZE
|
POOL
|
USER
|
NONE
|
Storage Pool with name '{pool.name}' was resized from size {old_size}GB to {pool.size}GB.
|
POOL_RESIZE_SNAPSHOTS
|
POOL
|
USER
|
NONE
|
Snapshot size of Storage Pool with name '{pool.name}' was resized from size {old_size}GB to {pool.snapshot_size}GB.
|
POOL_RESIZE_THIN
|
POOL
|
USER
|
NONE
|
Storage Pool with name '{pool.name}' was resized from soft size {old_soft_size}GB and hard size {old_hard_size}GB to soft size {pool.soft_size}GB and hard size {pool.hard_size}GB.
|
PORT_PREP_FOR_UPGRADE_TIMED_OUT
|
UPGRADE
|
SOFTWARE
|
Host / Port
|
Preparation of {port_type} port '{local_port_name}' for hot-upgrade timed out due to host '{host_name}' port '{host_port_name}'{host_port_addr}
|
POST_UPGRADE_SCRIPT_FINISHED
|
UPGRADE
|
SOFTWARE
|
NONE
|
Post-upgrade script finished successfully.
|
POST_UPGRADE_SCRIPT_INVOCATION_FAILED
|
UPGRADE
|
SOFTWARE
|
Contact IBM
|
Invocation of post-upgrade script failed with error {error}.
|
POST_UPGRADE_SCRIPT_REPORTED_FAILURE
|
UPGRADE
|
SOFTWARE
|
Contact IBM
|
Post upgrade script reported failure. Script output: {explanation}.
|
POST_UPGRADE_SCRIPT_STARTED
|
UPGRADE
|
SOFTWARE
|
NONE
|
Post-upgrade script started.
|
POWER_SUPPLY_UNIT_STATUS_IS_OK
|
HARDWARE
|
SOFTWARE
|
NONE
|
The status of {Component ID} is now OK.
|
PRE_UPGRADE
|
UPGRADE
|
SOFTWARE
|
NONE
|
System preparing an upgrade procedure of type {type}.
|
PRE_UPGRADE_SCRIPT_DISAPPROVES
|
UPGRADE
|
SOFTWARE
|
Contact IBM
|
Upgrade cannot commence because some of the validations in the pre-upgrade script failed. Explanation: {explanation}.
|
PRE_UPGRADE_SCRIPT_INVOCATION_FAILED
|
UPGRADE
|
SOFTWARE
|
Contact IBM
|
Invocation of pre-upgrade script failed with error {error}.
|
PRE_UPGRADE_VALIDATION_FAILED
|
UPGRADE
|
SOFTWARE
|
Contact IBM
|
One of the pre-upgrade validations failed with status {error}.
|
PRIVATE_KEY_ADDED
|
SECURITY
|
USER
|
NONE
|
A new private key named '{name}' with fingerprint '{fingerprint}' and size {key_size} bits was added.
|
PSU_CHANGE_DETECTED
|
POWER
|
HARDWARE
|
NONE
|
{Component ID} has been changed from a serial number '{old_serial}', part number '{old_part_number}', to serial number '{new_serial}' and part number '{new_part_number}'.
|
PSU_FIRMWARE_VERSION_UNEXPECTED
|
POWER
|
HARDWARE
|
Contact IBM
|
{Component}, of model '{Model}', has an unexpected code-level {Major}.{Minor}, which is old and should be upgraded.
|
PSU_MODEL_IS_OK_NOW
|
POWER
|
HARDWARE
|
NONE
|
Model '{PSU Model}' for {PSU} is valid.
|
PSU_MODEL_MIX_IS_OK_NOW
|
POWER
|
HARDWARE
|
NONE
|
{PSU-1}, of model '{PSU-1 Model}', is compatible with {PSU-2} of model '{PSU-2 Model}'.
|
PSU_WAS_INSTALLED
|
POWER
|
USER
|
NONE
|
{Component ID} with a serial number '{Serial}' and part number '{Part Number}' was installed in the system.
|
PSU_WAS_REMOVED
|
POWER
|
USER
|
NONE
|
{Component ID} with a serial number '{Serial}' and part number '{Part Number}' was removed from the system.
|
QoS_HAS_BEEN_TRIGGERED
|
HOST
|
SOFTWARE
|
Host IO count
|
Queues on port '{port_id}' in Module {Module Id} caused QoS to be activated.
|
REMOTE_OPERATION_FAILED_TIMED_OUT
|
TARGET
|
SOFTWARE
|
Target Connectivity
|
Operation on remote machine timed out. Invoking '{Function Name}' on target '{Target Name}' timed out.
|
REMOTE_SUPPORT_CLIENT_MOVED
|
REMOTE SUPPORT
|
SOFTWARE
|
NONE
|
The remote support client moved from {Old Module} to {New Module}.
|
REMOTE_SUPPORT_CLIENT_NO_AVAILABLE_MODULES
|
REMOTE SUPPORT
|
SOFTWARE
|
Modules
|
No live modules with {Port Type} ports are available to run the remote support client.
|
REMOTE_SUPPORT_CONNECTED
|
REMOTE SUPPORT
|
USER
|
NONE
|
System connected to remote support center {Destination}.
|
REMOTE_SUPPORT_CONNECTION_LOST
|
REMOTE SUPPORT
|
ENVIRONMENT
|
Modules / Network
|
Connection to remote support center {Destination} failed while the connection was in state {Disconnected Session State}.
|
REMOTE_SUPPORT_DEFINED
|
REMOTE SUPPORT
|
USER
|
NONE
|
Defined remote support center {Name} with IP address {Address} and port {Port}.
|
REMOTE_SUPPORT_DELETED
|
REMOTE SUPPORT
|
USER
|
NONE
|
Deleted remote support center {Name}.
|
REMOTE_SUPPORT_DISCONNECTED
|
REMOTE SUPPORT
|
USER
|
NONE
|
System disconnected from remote support center {Destination} while the connection was in state {Disconnected Session State}.
|
REMOTE_SUPPORT_IMMINENT_TIMEOUT
|
REMOTE SUPPORT
|
SOFTWARE
|
Timeout value
|
System is about to disconnect a busy connection to remote support center {Destination}.
|
REMOTE_SUPPORT_KEY_CLEARED
|
REMOTE SUPPORT
|
USER
|
NONE
|
The event is generated when the command remote_support_key_clear is run successfully.
|
REMOTE_SUPPORT_KEY_CREATED
|
REMOTE SUPPORT
|
USER
|
NONE
|
The event is generated when the command remote_support_key_create is run successfully.
|
REMOTE_SUPPORT_TIMEOUT
|
REMOTE SUPPORT
|
SOFTWARE
|
Timeout value
|
Connection to remote support center {Destination} timed out while the connection was in state {Disconnected Session State}.
|
RULE_CREATE
|
ALERTS
|
USER
|
NONE
|
Rule with name '{name}' was created.
|
RULE_DELETE
|
ALERTS
|
USER
|
NONE
|
Rule with name '{name}' was deleted.
|
RULE_RENAME
|
ALERTS
|
USER
|
NONE
|
Rule with name '{old name}' was renamed '{new name}'.
|
RULE_UPDATE
|
ALERTS
|
USER
|
NONE
|
Rule with name '{name}' was updated.
|
SAS_CONTROLLER_CHANGE_DETECTED
|
SAS DISK CONTROLLER
|
HARDWARE
|
NONE
|
The SAS controller on module {Module} was changed from a serial of {old_serial} and board assembly of '{old_assembly}' to serial {new_serial} and board assembly '{new_assembly}'.
|
SAS_CONTROLLER_DIED
|
SAS DISK CONTROLLER
|
HARDWARE
|
SAS Controller
|
Severe SAS controller error occurred. Controller was removed from PCI-E bus.
|
SAS_CONTROLLER_FAULT
|
SAS DISK CONTROLLER
|
HARDWARE
|
SAS Controller
|
SAS Controller Firmware on {component ID} faulted with code {Fault Code}
|
SAS_CONTROLLER_FAULT_CLEARED
|
SAS DISK CONTROLLER
|
SOFTWARE
|
NONE
|
SAS Controller Firmware on {component ID} recovered from its fault state.
|
SAS_CONTROLLER_IMPLICIT_RESET_FAILED
|
SAS DISK CONTROLLER
|
HARDWARE
|
SAS Controller
|
SAS driver sent an implicit reset to SAS controller, but it failed.
|
SAS_CONTROLLER_IMPLICIT_RESET_SUCCESSFUL
|
SAS DISK CONTROLLER
|
HARDWARE
|
NONE
|
SAS driver sent an implicit reset to SAS controller, controller was successfully reset.
|
SAS_CONTROLLER_RESET_FAILED
|
SAS DISK CONTROLLER
|
HARDWARE
|
SAS Controller
|
Reset to the SAS controller on {Component ID} has failed. Reset duration {reset duration} milliseconds, IOs pending {IOs Pending}.
|
SAS_CONTROLLER_RESET_SUCCEEDED
|
SAS DISK CONTROLLER
|
HARDWARE
|
NONE
|
Reset to disk {Component ID} succeeded. Reset duration {reset duration} milliseconds, IOs pending {IOs Pending}.
|
SAS_CONTROLLER_RESET_WAS_SENT
|
SAS DISK CONTROLLER
|
SOFTWARE
|
NONE
|
A SAS controller reset was sent on {Component ID}, IOs pending {IOs Pending}.
|
SAS_LINK_ERRORS
|
SAS DISK CONTROLLER
|
HARDWARE
|
SAS Controller
|
SAS link {SAS Type}[{ID}] on module {Module} has too many errors, {Delta} since last sample.
|
SAS_LINK_NO_MORE_ERRORS
|
SAS DISK CONTROLLER
|
HARDWARE
|
NONE
|
SAS link {SAS Type}[{ID}] on module {Module} no longer has errors, {Delta} since last sample.
|
SAS_LINK_SPEED_CHANGE
|
SAS DISK CONTROLLER
|
HARDWARE
|
SAS Controller
|
SAS link {SAS Type}[{ID}] on module {Module} speed changed from {Old Speed} to {New Speed}.
|
SAS_LINK_STATE_CHANGE
|
SAS DISK CONTROLLER
|
HARDWARE
|
Monitor
|
SAS link {SAS Type}[{ID}] on module {Module} changed state from {State} to {State}.
|
SAS_LINK_TOO_MANY_RESETS
|
SAS DISK CONTROLLER
|
HARDWARE
|
SAS Controller
|
SAS link {SAS Type}[{ID}] on module {Module} had {Delta} resets, only {Allowed} resets are allowed. Disk {Disk} will be automatically phased out.
|
SAS_LINK_TOO_MANY_RESETS_PHASEOUT_DISK
|
SAS DISK CONTROLLER
|
HARDWARE
|
SAS Controller
|
SAS link {SAS Type}[{ID}] on module {Module} had {Delta} resets, only {Allowed} resets are allowed. Please phase out disk {Disk}.
|
SAS_RESET_DETECTED
|
SAS DISK CONTROLLER
|
HARDWARE
|
SAS Controller
|
SAS Controller reset was detected on {component ID} a total of {Reset Count} times.
|
SAS_VERSION_IS_INCONSISTENT
|
SAS DISK CONTROLLER
|
HARDWARE
|
SAS Controller
|
SAS Controller Firmware version on module {Module} version {actual} is inconsistent with persistent version {persistent}.
|
SAS_VERSION_IS_UNEXPECTED
|
SAS DISK CONTROLLER
|
HARDWARE
|
SAS Controller
|
SAS Controller Firmware version on module {Module} version {actual} is old and should be upgraded
|
SATA_SMART_STATUS_READING_FAILED
|
SATA SMART
|
HARDWARE
|
Disk
|
Reading SMART attributes of Disk ID {} failed. SMART trip value={}
|
SATA_SMART_STATUS_READING_FAILURE
|
SATA SMART
|
HARDWARE
|
Disk
|
{Component ID} reading SMART attributes failed. SMART trip value={}
|
SCHEDULE_CREATE
|
ASYNC MIRROR
|
USER
|
NONE
|
Schedule was created with name '{schedule name}'.
|
SCHEDULE_DELETE
|
ASYNC MIRROR
|
USER
|
NONE
|
Schedule with name '{schedule name}' was deleted.
|
SCHEDULE_RENAME
|
ASYNC MIRROR
|
USER
|
NONE
|
Schedule with name '{old_name}' was renamed '{schedule name}'.
|
SCHEDULE_UPDATE
|
ASYNC MIRROR
|
USER
|
NONE
|
Schedule with name '{schedule name}' was updated.
|
SECOND_DISK_FAILURE
|
SYSTEM
|
HARDWARE
|
Contact IBM
|
Disk {Component ID} failed during rebuild.
|
SECONDARY_VOLUME_RESIZE
|
MIRROR
|
SOFTWARE
|
NONE
|
Secondary volume with name '{volume.name}' was resized by primary machine from {old_size}GB to {volume.size}GB.
|
SERVICE_FAILED_TO_PHASEIN
|
SERVICE
|
SOFTWARE
|
Contact IBM
|
{Component ID} failed to phase-in.
|
SERVICE_FAILED_TO_RESTART
|
SERVICE
|
SOFTWARE
|
Contact IBM
|
{Component ID} failed to restart.
|
SERVICE_HAS_FAILED
|
SERVICE
|
SOFTWARE
|
Contact IBM
|
{Component ID} has failed.
|
SES_DOUBLE_PSU_FAILURE
|
SCSI ENCLOSURE SERVICES / Power
|
HARDWARE
|
PSUs
|
Both PSUs on {Module} report critical failures; this is probably caused by a faulty PDB.
|
SES_PSU_MONITORING_UNAVAILABLE
|
SCSI ENCLOSURE SERVICES / Power
|
HARDWARE
|
PSUs
|
Can't monitor {PSU}, but it seems to supply power.
|
SES_PSU_STATUS_HAS_CHANGED
|
SCSI ENCLOSURE SERVICES / Power
|
HARDWARE
|
PSUs
|
{psu} changed state from {old_state} to {new state}.
|
SES_PSU_VOLTAGE_OK
|
SCSI ENCLOSURE SERVICES / Power
|
HARDWARE
|
None
|
{PSU} {Voltage Type} output DC voltage value is now {Voltage}, which is within the valid range.
|
SHOULD_BE_EMERGENCY_SHUTDOWN
|
SYSTEM
|
SOFTWARE
|
UPS / Power
|
An emergency shutdown has been detected, but UPS control is disabled. Shutdown reason: {Shutdown Reason}.
|
SHUTDOWN_PARAMS
|
SYSTEM
|
SOFTWARE
|
NONE
|
System action is '{Shutdown Action}'. Target state is '{Target State}'. Safemode is '{Safe Mode}'. UPS Sleep Time={UPS sleep time in seconds} seconds.
|
SLAVE_CONS_GROUP_ADD_VOLUME
|
CONSISTENCY GROUP
|
USER
|
NONE
|
Volume with name '{volume.name}' was added to Consistency Group with name '{cg.name}' by its remote peer.
|
SLAVE_CONS_GROUP_REMOVE_VOLUME
|
CONSISTENCY GROUP
|
USER
|
NONE
|
Volume with name '{volume.name}' was removed from Consistency Group with name '{cg.name}' by its remote peer.
|
SLAVE_CONS_GROUP_SNAPSHOTS_CREATE
|
CONSISTENCY GROUP
|
USER
|
NONE
|
Mirrored Snapshot Group for Consistency Group with name '{cg.name}' was created with name '{cs_name}'.
|
SLAVE_CONS_GROUP_SNAPSHOTS_OVERWRITE
|
CONSISTENCY GROUP
|
USER
|
NONE
|
Mirrored Snapshot Group named '{cs_name}' was overridden for Consistency Group with name '{cg.name}'.
|
SMS_GATEWAY_DEFINE
|
ALERTS
|
USER
|
NONE
|
SMS gateway with name '{name}' was defined.
|
SMS_GATEWAY_DELETE
|
ALERTS
|
USER
|
NONE
|
SMS gateway with name '{name}' was deleted.
|
SMS_GATEWAY_PRIORITIZE
|
ALERTS
|
USER
|
NONE
|
SMS gateways were prioritized; the new order is {order}.
|
SMS_GATEWAY_RENAME
|
ALERTS
|
USER
|
NONE
|
SMS gateway with name '{old name}' was renamed '{new name}'.
|
SMS_GATEWAY_UPDATE
|
ALERTS
|
USER
|
NONE
|
SMS gateway with name '{name}' was updated.
|
SMS_HAS_FAILED
|
ALERTS
|
ENVIRONMENT
|
Network/SMS gateway
|
Sending event {Event Code} ({Event Index}) to {Destination List} via {SMS Gateway} and {SMTP Gateway} failed. Module: {Module ID}; Error message: '{Error Message}'; timeout expired: {Timeout Expired?}.
|
SMS_NOT_SENT
|
ALERTS
|
ENVIRONMENT
|
Network/SMS gateway
|
Sending event {Event Code} ({Event Index}) to {Destination List} via {SMS Gateway} and {SMTP Gateway} was waived because of a failed SMTP gateway. It will not be used until {Retry Time}.
|
SMTP_GATEWAY_DEFINE
|
PROACTIVE SUPPORT / ALERTS
|
USER
|
NONE
|
SMTP gateway with name '{name}' was defined.
|
SMTP_GATEWAY_DELETE
|
PROACTIVE SUPPORT / ALERTS
|
USER
|
NONE
|
SMTP gateway with name '{name}' was deleted.
|
SMTP_GATEWAY_FAILED
|
PROACTIVE SUPPORT / ALERTS
|
ENVIRONMENT
|
Network / SMTP gateway
|
SMTP gateway with name '{name}' has failed. It will not be used until {Retry Time}.
|
SMTP_GATEWAY_PRIORITIZE
|
PROACTIVE SUPPORT / ALERTS
|
USER
|
NONE
|
SMTP gateways were prioritized; the new order is {order}.
|
SMTP_GATEWAY_RENAME
|
PROACTIVE SUPPORT / ALERTS
|
USER
|
NONE
|
SMTP gateway with name '{old name}' was renamed '{new name}'.
|
SMTP_GATEWAY_UPDATE
|
PROACTIVE SUPPORT / ALERTS
|
USER
|
NONE
|
SMTP gateway with name '{name}' was updated.
|
SMTP_GATEWAY_VIA_NODE_FAILED
|
PROACTIVE SUPPORT / ALERTS
|
ENVIRONMENT
|
Network / SMTP gateway / Module
|
Sending event {Event Code} ({Event Index}) through {SMTP Gateway} via {Module ID} has failed; Error message: '{Error Message}'.
|
SNAPSHOT_CHANGE_PRIORITY
|
SNAPSHOTS
|
USER
|
NONE
|
Snapshot Delete Priority of Snapshot named '{snapshot.name}' was changed from '{old_priority}' to '{snapshot.delete_priority}'.
|
SNAPSHOT_CREATE
|
SNAPSHOTS
|
USER
|
NONE
|
Snapshot named '{snapshot.name}' was created for volume named '{volume.name}'.
|
SNAPSHOT_CREATE_FAILED_TOO_MANY
|
SNAPSHOTS
|
SOFTWARE
|
Snapshot Count
|
Snapshot for volume named '{volume.name}' could not be created. You are attempting to add more volumes than the system permits.
|
SNAPSHOT_CREATE_MANY
|
SNAPSHOTS
|
SOFTWARE
|
Snapshot Count
|
Created {num_of_vols} snapshots.
|
SNAPSHOT_DELETED_DUE_TO_POOL_EXHAUSTION
|
SNAPSHOTS
|
SOFTWARE
|
Pool Size
|
Snapshot named '{snap.name}' has been deleted because Storage Pool named '{snap.pool_name}' is full.
|
SNAPSHOT_DUPLICATE
|
SNAPSHOTS
|
USER
|
NONE
|
Snapshot named '{snapshot.name}' was created as duplicate of Snapshot named '{original_snapshot.name}'.
|
SNAPSHOT_DUPLICATE_FAILED_TOO_MANY
|
SNAPSHOTS
|
SOFTWARE
|
Snapshot Count
|
Snapshot named '{snapshot.name}' could not be duplicated. You are attempting to add more volumes than the system permits.
|
SNAPSHOT_FORMAT
|
SNAPSHOTS
|
USER
|
NONE
|
Snapshot named '{snapshot.name}' was formatted.
|
SNAPSHOT_GROUP_CHANGE_PRIORITY
|
SNAPSHOTS
|
USER
|
NONE
|
Deletion Priority of all Snapshots in Snapshot Group with name '{cs_name}' was changed from '{old priority}' to '{new priority}'.
|
SNAPSHOT_GROUP_DELETE
|
SNAPSHOTS
|
USER
|
NONE
|
All Snapshots in Snapshot Group with name '{cs_name}' were deleted.
|
SNAPSHOT_GROUP_DELETED_DUE_TO_POOL_EXHAUSTION
|
SNAPSHOTS
|
SOFTWARE
|
Pool Size
|
All Snapshots in Snapshot Group with name '{snapshot.sg_name}' have been deleted because Storage Pool with name '{snapshot.pool_name}' is full.
|
SNAPSHOT_GROUP_DISBAND
|
SNAPSHOTS
|
USER
|
NONE
|
Snapshot Group with name '{cs_name}' was dismantled. All Snapshots which belonged to that Snapshot Group should be accessed directly.
|
SNAPSHOT_GROUP_DUPLICATE
|
SNAPSHOTS
|
USER
|
NONE
|
All Snapshots in Snapshot Group with name '{cs_name}' were duplicated. Duplicate Snapshot Group is named '{new_cs_name}'.
|
SNAPSHOT_GROUP_FORMAT
|
SNAPSHOTS
|
USER
|
NONE
|
All Snapshots in Snapshot Group with name '{cs_name}' were formatted.
|
SNAPSHOT_GROUP_LOCK
|
SNAPSHOTS
|
USER
|
NONE
|
All Snapshots in Snapshot Group with name '{cs_name}' were locked.
|
SNAPSHOT_GROUP_RENAME
|
SNAPSHOTS
|
USER
|
NONE
|
Snapshot Group with name '{cs_name}' was renamed to '{new_name}'.
|
SNAPSHOT_GROUP_RESTORE
|
SNAPSHOTS
|
USER
|
NONE
|
Volumes were restored from Snapshot Group with name '{cs_name}'.
|
SNAPSHOT_GROUP_UNLOCK
|
SNAPSHOTS
|
USER
|
NONE
|
All Snapshots in Snapshot Group with name '{cs_name}' were unlocked.
|
SNAPSHOT_OVERWRITE
|
SNAPSHOTS
|
USER
|
NONE
|
Snapshot named '{snapshot.name}' was overridden for volume named '{volume.name}'.
|
SNAPSHOT_RESTORE
|
SNAPSHOTS
|
USER
|
NONE
|
Volume named '{volume.name}' was restored from Snapshot named '{snapshot.name}'.
|
SPECIAL_TYPE_SET
|
HOST
|
USER
|
NONE
|
Type of {host_or_cluster} with name '{host}' was set to '{type}'.
|
SSD_ABNORMAL_ERROR
|
SSD CACHE
|
HARDWARE
|
SSD
|
Unit attentions or aborts in the last 30 minutes on {SSD ID}, start lba={start_lba}, last lba={last_lba}, command={command}, latency={latency} ms.
|
SSD_BAD_PERFORMANCE
|
SSD CACHE
|
HARDWARE
|
SSD
|
Bad performance on {SSD ID}, I/O count={I/O Count}, transferred kbytes={kbytes}, msecs={seconds}.
|
SSD_BIGGER_THAN_EXPECTED
|
SSD CACHE
|
HARDWARE
|
SSD
|
Installed SSD {Component ID} has a size of {Size}GB which is bigger than the expected size of {Spec Size}GB.
|
SSD_CACHING_DISABLED
|
SSD CACHE
|
USER
|
NONE
|
SSD Caching feature disabled.
|
SSD_CACHING_ENABLED
|
SSD CACHE
|
USER
|
NONE
|
SSD Caching feature enabled. SSDs can now be installed.
|
SSD_CHANGE_WAS_DETECTED
|
SSD CACHE
|
HARDWARE
|
SSD
|
{Component ID} has been changed.
|
SSD_COMPLIANCE_CHECK_FAILED
|
SSD CACHE
|
HARDWARE
|
SSD
|
Installed SSD {Component ID} does not conform to the specification.
|
SSD_COMPONENT_TEST_STARTED
|
SSD CACHE
|
USER
|
NONE
|
Test of {Component ID} started.
|
SSD_CYCLE_INFO
|
SSD CACHE
|
SOFTWARE
|
NONE
|
SSD {Component ID} passed {Cycles} cycles.
|
SSD_DATA_INTEGRITY_ERROR_DETECTED
|
SSD CACHE
|
HARDWARE
|
SSD
|
Read from SSD {Disk ID} failed the integrity check due to {Reason}, Page Number={Page Number}
|
SSD_DEFERRED_ERROR
|
SSD CACHE
|
HARDWARE
|
SSD
|
SSD {SSD ID} signaled deferred error on start lba={first_lba}, last lba={last_lba}, scsi_opcode={scsi_opcode}, latency={latency} usec, key={key}
|
SSD_DISK_LABELS_MISMATCH
|
SSD CACHE
|
SOFTWARE
|
Contact IBM
|
SSD {SSD ID} has data that mismatches disk {Disk ID}
|
SSD_DOES_NOT_EXIST
|
SSD CACHE
|
HARDWARE
|
SSD / Module
|
SSD {Component ID} doesn't exist.
|
SSD_ERROR_SENSE_INFORMATION
|
SSD CACHE
|
HARDWARE
|
SSD
|
SSD {SSD ID} had sense information indicating an error: {Sense Key Number}/{Sense Code Number 1}/{Sense Code Number 2} (FRU={FRU Code}) {Sense Key} - {Sense Code}.
|
SSD_FOUND_UNEXPECTED
|
SSD CACHE
|
SOFTWARE
|
SSD / Module
|
SSD {Component ID} was found while the SSD Caching feature is disabled.
|
SSD_GENERIC_SUPPORT_USED
|
SSD CACHE
|
SOFTWARE
|
NONE
|
SSD {Component ID} using default smart attributes.
|
SSD_HAS_FAILED
|
SSD CACHE
|
HARDWARE
|
SSD
|
SSD {Component ID} Failed.
|
SSD_HIGH_MEDIA_ERROR_RATE_CLEARED
|
SSD CACHE
|
HARDWARE
|
NONE
|
{Component ID} no longer exhibits high media error rate.
|
SSD_HIGH_MEDIA_ERROR_RATE_DETECTED
|
SSD CACHE
|
HARDWARE
|
SSD
|
{Component ID} exhibits high media error rate of rule {rule_type}.
|
SSD_INFO_EXTRA_EVENT
|
SSD CACHE
|
HARDWARE
|
SSD
|
SSD {Component ID} Extra information event.
|
SSD_INTERMIX_DETECTED
|
SSD CACHE
|
HARDWARE
|
SSD
|
SSD {Component ID} of model {SSD model}, by vendor {SSD vendor}, {User message} {Required model}
|
SSD_LARGER_THAN_SYSTEM_SSD_SIZE
|
SSD CACHE
|
HARDWARE
|
SSD
|
SSD {Component ID} has a size of {New size}GB, which is larger than the system SSD size of {System size}GB.
|
SSD_LIFE_GAUGE
|
SSD CACHE
|
HARDWARE
|
SSD
|
SSD {Component ID} smart attribute LIFE GAUGE exceeds a threshold. Value is {Value}.
|
SSD_LOG_PAGE_READING_FAILED
|
SSD CACHE
|
HARDWARE
|
SSD
|
SSD {Component ID} Failed reading log page {log}.
|
SSD_LONG_LATENCY
|
SSD CACHE
|
HARDWARE
|
SSD
|
Long latencies on ssd I/Os in the last 30 minutes on {SSD ID}, start LBA={Start LBA}, last LBA={Last LBA}, scsi_opcode={scsi_opcode}, latency={latency} ms.
|
SSD_MEDIUM_ERROR
|
SSD CACHE
|
HARDWARE
|
SSD
|
Media errors on {SSD ID}, start LBA={Start LBA}, last LBA={Last LBA}, latency={latency} ms.
|
SSD_NEAR_WEAROUT
|
SSD CACHE
|
HARDWARE
|
SSD
|
SSD {Component ID} has bad SMART status. Attribute {Attribute} ({Attribute}) has value of {Value}.
|
SSD_OFFLINE
|
SSD CACHE
|
HARDWARE
|
SSD
|
SSD {Component ID} was marked as offline due to error recovery failures.
|
SSD_PROBLEMATIC_BEHAVIOR_CLEARED
|
SSD CACHE
|
HARDWARE
|
NONE
|
{Component ID} no longer exhibits problematic behavior.
|
SSD_PROBLEMATIC_BEHAVIOR_DETECTED
|
SSD CACHE
|
HARDWARE
|
SSD
|
{Component ID} exhibits problematic behavior.
|
SSD_RECOVERED_ERROR
|
SSD CACHE
|
HARDWARE
|
NONE
|
SSD {SSD ID} autonomously recovered from an error successfully, start lba={first_lba}, last lba={last_lba}, scsi_opcode={scsi_opcode}, latency={latency} usec.
|
SSD_REQUEST_ERROR_INFORMATION
|
SSD CACHE
|
HARDWARE
|
SSD
|
SSD {SSD ID} had error: {Error Name}, latency={latency} ms.
|
SSD_RESET_DONE
|
SSD CACHE
|
SOFTWARE
|
NONE
|
Reset to disk {Component ID} was executed and succeeded. Reset duration {reset duration} usecs, IOs pending {IOs Pending}.
|
SSD_RESET_FAILURE
|
SSD CACHE
|
HARDWARE
|
SSD
|
Reset to disk {Component ID} was executed and failed. Reset duration {reset duration} usecs, IOs pending {IOs Pending}.
|
SSD_RESPONSIVE
|
SSD CACHE
|
SOFTWARE
|
NONE
|
SSD {SSD ID} is now responsive. Was unresponsive for {time} msecs
|
SSD_SECURE_ERASE_FAILED
|
SSD CACHE
|
HARDWARE
|
SSD
|
SSD {Component ID} secure erase failed.
|
SSD_SMALLER_THAN_EXPECTED
|
SSD CACHE
|
HARDWARE
|
SSD
|
Installed SSD {Component ID} has a size of {Size}GB which is smaller than the expected size of {Spec Size}GB.
|
SSD_SMALLER_THAN_SYSTEM_SSD_SIZE
|
SSD CACHE
|
HARDWARE
|
SSD
|
SSD {Component ID} has a size of {New size}GB, which is smaller than the system SSD size of {System size}GB.
|
SSD_SMART_ATTRIBUTE_THRESHOLD
|
SSD CACHE
|
HARDWARE
|
SSD
|
SSD {Component ID} smart attribute {Smart attribute} ({Attribute}) exceeds a threshold. Value is {Value}.
|
SSD_SMART_READING_FAILED
|
SSD CACHE
|
HARDWARE
|
SSD
|
SSD {Component ID} failed reading smart attributes.
|
SSD_SPEED_HAS_CHANGED
|
SSD CACHE
|
HARDWARE
|
SSD
|
SSD {Component ID} speed has changed to {Speed} Gbps
|
SSD_UNRESPONSIVE
|
SSD CACHE
|
HARDWARE
|
SSD
|
SSD {SSD ID} is unresponsive for {time} msecs
|
SSD_WORN_OUT
|
SSD CACHE
|
HARDWARE
|
SSD
|
SSD {Component ID} has very bad SMART status and must be replaced. Attribute {Attribute} ({Attribute}) has value of {Value}.
|
SSH_REVOKE_KEY_FAILED
|
SECURITY
|
SOFTWARE
|
KEY
|
Failed to revoke authorized SSH key ending with '{Tail of Authorized SSH key}' for user '{Unix Account Name}' on module '{Component ID}'.
|
SSH_REVOKE_KEY_OK
|
SECURITY
|
SOFTWARE
|
NONE
|
Authorized SSH key ending with '{Tail of Authorized SSH key}' was successfully revoked for user '{Unix Account Name}' on all modules in the system.
|
START_WORK
|
SYSTEM
|
SOFTWARE
|
NONE
|
System has entered ON state.
|
STORAGE_POOL_EXHAUSTED
|
POOL
|
SOFTWARE
|
Pool Usage
|
Pool '{pool}' is full. All volumes are locked.
|
STORAGE_POOL_SNAPSHOT_USAGE_DECREASED
|
POOL
|
SOFTWARE
|
NONE
|
Usage by snapshots of Storage Pool with name '{pool.name}' has decreased to {current}%.
|
STORAGE_POOL_SNAPSHOT_USAGE_INCREASED
|
POOL
|
SOFTWARE
|
Pool Usage
|
Usage by snapshots of Storage Pool with name '{pool.name}' has reached {current}%.
|
STORAGE_POOL_UNLOCKED
|
POOL
|
SOFTWARE
|
NONE
|
Pool '{pool}' has empty space. All volumes are unlocked.
|
STORAGE_POOL_VOLUME_USAGE_BACK_TO_NORMAL
|
POOL
|
SOFTWARE
|
NONE
|
Usage by volumes of Storage Pool with name '{pool.name}' is back to normal with {current}% of the total pool size.
|
STORAGE_POOL_VOLUME_USAGE_DECREASED
|
POOL
|
SOFTWARE
|
NONE
|
Usage by volumes of Storage Pool with name '{pool.name}' has decreased to {current}%.
|
STORAGE_POOL_VOLUME_USAGE_INCREASED
|
POOL
|
SOFTWARE
|
Pool Usage
|
Usage by volumes of Storage Pool with name '{pool.name}' has reached {current}%.
|
STORAGE_POOL_VOLUME_USAGE_TOO_HIGH
|
POOL
|
SOFTWARE
|
Pool Usage
|
Usage by volumes of Storage Pool with name '{pool.name}' has reached {current}% of the total pool size.
|
SUSPICIOUS_PSU_INFORMATION
|
POWER
|
SOFTWARE
|
PSU
|
Suspicious information was found for {PSU}; this might happen after a PSU replacement. Some of the hardware sensor monitoring will be disabled until the module is power cycled.
|
SYSTEM_CAN_NOT_INCREASE_SPARES
|
SYSTEM
|
SOFTWARE
|
Free up Space
|
System's spares cannot be increased to {modules} modules and {disks} disks. {Capacity}GB should be freed.
|
SYSTEM_DISK_CAPACITY_EXPANDED
|
SYSTEM
|
SOFTWARE
|
NONE
|
System's hard capacity is now {Capacity}GB.
|
SYSTEM_DISK_CAPACITY_LIMIT_PERCENTAGE_EXPANDED
|
SYSTEM
|
SOFTWARE
|
NONE
|
System's hard capacity was expanded to {Capacity limit Percentage}.
|
SYSTEM_ENTERED_CHARGING_STATE
|
POWER
|
SOFTWARE
|
UPS Charge Level
|
System cannot start work until its uninterruptible power supplies are sufficiently charged.
|
SYSTEM_HARD_CAPACITY_CHANGED
|
SYSTEM
|
SOFTWARE
|
NONE
|
System's hard capacity is now {Capacity}GB.
|
SYSTEM_HAS_ENTERED_MAINTENANCE_MODE
|
SYSTEM
|
SOFTWARE
|
Contact IBM
|
System has entered MAINTENANCE state [{Reason}]
|
SYSTEM_LEFT_CHARGING_STATE
|
POWER
|
SOFTWARE
|
NONE
|
System is sufficiently charged.
|
SYSTEM_NO_SPARES
|
SYSTEM
|
SOFTWARE
|
Contact IBM
|
System has no spare disks.
|
SYSTEM_PREPARE_FOR_DELETE
|
SYSTEM
|
SOFTWARE
|
NONE
|
The event is generated when the user calls system_prepare_for_delete. The IBM Service Center uses this event to identify a system that is about to be deleted and is not expected to send heartbeats anymore.
|
SYSTEM_SOFT_CAPACITY_CHANGED
|
SYSTEM
|
SOFTWARE
|
NONE
|
System's soft capacity is now {Capacity}GB.
|
SYSTEM_SPARES_ARE_LOW
|
SYSTEM
|
SOFTWARE
|
Free up space
|
System capacity spares are {modules} modules and {disks} disks.
|
SYSTEM_TEMPERATURE_IS_CRITICALLY_HIGH
|
SYSTEM
|
ENVIRONMENT
|
Data Center Temperature
|
System temperature is {System Temperature}C, which is critically high. Shutting down the system.
|
SYSTEM_TEMPERATURE_IS_CRITICALLY_HIGH_SHUT_IT_DOWN_IMMEDIATELY
|
SYSTEM
|
ENVIRONMENT
|
Data Center Temperature
|
System temperature is {System Temperature}C - which is critically high - but automatic shutdown is disabled. You need to manually shut down the system immediately!
|
SYSTEM_TEMPERATURE_IS_CRITICALLY_HIGH_SHUTDOWN_IMMEDIATELY
|
SYSTEM
|
ENVIRONMENT
|
Data Center Temperature
|
System temperature is {System Temperature}C - which is critically high - but automatic shutdown is disabled. Shut down the system immediately!
|
SYSTEM_TEMPERATURE_IS_HIGH
|
SYSTEM
|
ENVIRONMENT
|
Data Center Temperature
|
System temperature is {System Temperature}C, which is high. It is approaching the maximal allowable value.
|
SYSTEM_TEMPERATURE_IS_HIGH_AND_STABILIZING
|
SYSTEM
|
ENVIRONMENT
|
Data Center Temperature
|
System temperature is {System Temperature}C. It is stabilizing, but still close to the maximal allowable value.
|
SYSTEM_TEMPERATURE_IS_OK_NOW
|
SYSTEM
|
ENVIRONMENT
|
NONE
|
System temperature is {System Temperature}C, which is within allowed limits.
|
SYSTEM_TEMPERATURE_IS_TOO_HIGH
|
SYSTEM
|
ENVIRONMENT
|
Data Center Temperature
|
System temperature is {System Temperature}C, which is higher than the maximal allowable value. If the system doesn't cool down soon, it might be automatically shut down.
|
SYSTEM_TEMPERATURE_IS_TOO_HIGH_AND_STABILIZING
|
SYSTEM
|
ENVIRONMENT
|
Data Center Temperature
|
System temperature is {System Temperature}C. It is stabilizing, but is still higher than the maximal allowable value. If the system doesn't cool down soon, it might be automatically shut down.
|
SYSTEM_TEMPERATURE_IS_TOO_LOW
|
SYSTEM
|
ENVIRONMENT
|
Data Center Temperature
|
System temperature is {System Temperature}C, which is lower than the minimal allowable value.
|
SYSTEM_TEMPERATURE_RISES_SUSPICIOUSLY_FAST
|
SYSTEM
|
ENVIRONMENT
|
Data Center Temperature
|
System temperature ({System Temperature}C) is rising suspiciously fast (from {Previous Temperature}C). Check the air conditioning system.
|
SYSTEM_TEMPERATURE_TOO_HIGH
|
SYSTEM
|
ENVIRONMENT
|
Data Center Temperature
|
System temperature is {System Temperature}C, which is higher than the maximal allowable value.
|
SYSTEM_TEMPERATURE_TOO_HIGH_AND_STABILIZING
|
SYSTEM
|
ENVIRONMENT
|
Data Center Temperature
|
System temperature is {System Temperature}C. It is stabilizing, but is still higher than the maximal allowable value.
|
SYSTEM_TEMPERATURE_TOO_HIGH_AND_STABILIZING_SHUTDOWN
|
SYSTEM
|
ENVIRONMENT
|
Data Center Temperature
|
System temperature is {System Temperature}C. It is stabilizing, but is still higher than the maximal allowable value. If the system doesn't cool down soon, it will be automatically shut down.
|
SYSTEM_TEMPERATURE_TOO_HIGH_SHUTDOWN
|
SYSTEM
|
ENVIRONMENT
|
Data Center Temperature
|
System temperature is {System Temperature}C, which is higher than the maximal allowable value. If the system doesn't cool down soon, it will be automatically shut down.
|
SYSTEM_USABLE_HARD_CAPACITY_LIMIT_RESET
|
SYSTEM
|
SOFTWARE
|
NONE
|
The event is generated when the user resets the system usable hard capacity, defaulting it to the total system hard capacity.
|
SYSTEM_USABLE_HARD_CAPACITY_LIMIT_SET
|
SYSTEM
|
SOFTWARE
|
NONE
|
The event is generated when the user sets a new value for the system usable hard capacity.
|
TARGET_ALLOW_ACCESS
|
TARGET
|
USER
|
NONE
|
Target '{target.name}' is allowed to access this machine.
|
TARGET_CLOCK_SKEW_ABOVE_LIMIT
|
TARGET
|
ENVIRONMENT
|
Adjust Target Clock
|
Target '{target.name}' has clock skew above the allowed limit relative to the local machine.
|
TARGET_CLOCK_SKEW_RESOLVED
|
TARGET
|
ENVIRONMENT
|
NONE
|
Target named '{target.name}' clock skew has been resolved.
|
TARGET_CONNECTION_DISCONNECTED
|
TARGET
|
ENVIRONMENT
|
Target Connectivity / Network / Target
|
Target named '{target.name}' is no longer accessible through remote service {module_id}.
|
TARGET_CONNECTION_ESTABLISHED
|
TARGET
|
ENVIRONMENT
|
NONE
|
Target named '{target.name}' is accessible through remote service {module_id}.
|
TARGET_CONNECTIVITY_ACTIVATE
|
TARGET
|
USER
|
NONE
|
Connectivity between Port '{Connection Remote Port Address}' of target named '{Connection Target Name}' and {Local FC Port} was activated.
|
TARGET_CONNECTIVITY_CREATE
|
TARGET
|
USER
|
NONE
|
Port '{Connection Remote Port Address}' of target named '{Connection Target Name}' is connected to the system through {Local FC Port}.
|
TARGET_CONNECTIVITY_CREATE_FAILED_TOO_MANY
|
TARGET
|
USER
|
Target Connection Count
|
Port could not be connected to the system. You are attempting to define more connections than the system permits.
|
TARGET_CONNECTIVITY_DEACTIVATE
|
TARGET
|
USER
|
NONE
|
Connectivity between Port '{Connection Remote Port Address}' of target named '{Connection Target Name}' and {Local FC Port} was deactivated.
|
TARGET_CONNECTIVITY_DELETE
|
TARGET
|
USER
|
NONE
|
Port '{Connection Remote Port Address}' of target named '{Connection Target Name}' was disconnected from {Local FC Port}.
|
TARGET_DEFINE
|
TARGET
|
USER
|
NONE
|
Target named '{target.name}' was defined.
|
TARGET_DEFINE_FAILED_TOO_MANY
|
TARGET
|
SOFTWARE
|
Target Count
|
Target could not be defined. You are attempting to define more targets than the system permits.
|
TARGET_DELETE
|
TARGET
|
USER
|
NONE
|
Target named '{target.name}' was deleted.
|
TARGET_DISCONNECTED
|
TARGET
|
ENVIRONMENT
|
Target Connectivity / Network / Target
|
Target named '{target.name}' is no longer accessible through any gateway module.
|
TARGET_ISCSI_CONNECTIVITY_ACTIVATE
|
TARGET
|
USER
|
NONE
|
Connectivity between Port '{Connection Remote Port Address}' of target named '{Connection Target Name}' and IP Interface '{Local IP Interface}' was activated.
|
TARGET_ISCSI_CONNECTIVITY_CREATE
|
TARGET
|
USER
|
NONE
|
Port '{Connection Remote Port Address}' of target named '{Connection Target Name}' is connected to the system through IP Interface '{Local IP Interface}'.
|
TARGET_ISCSI_CONNECTIVITY_DEACTIVATE
|
TARGET
|
USER
|
NONE
|
Connectivity between Port '{Connection Remote Port Address}' of target named '{Connection Target Name}' and IP Interface '{Local IP Interface}' was deactivated.
|
TARGET_ISCSI_CONNECTIVITY_DELETE
|
TARGET
|
USER
|
NONE
|
Port '{Connection Remote Port Address}' of target named '{Connection Target Name}' was disconnected from IP Interface '{Local IP Interface}'.
|
TARGET_LINK_DOWN_BEYOND_THRESHOLD
|
TARGET
|
ENVIRONMENT
|
Target Connectivity / Network / Target
|
Target named '{target.name}' has not been accessible for a long time.
|
TARGET_PORT_ACTIVATE
|
TARGET
|
USER
|
NONE
|
Port '{port_name}' in target named '{target.name}' was activated.
|
TARGET_PORT_ADD
|
TARGET
|
USER
|
NONE
|
Port '{port_name}' was added to target named '{target.name}'.
|
TARGET_PORT_DEACTIVATE
|
TARGET
|
USER
|
NONE
|
Port '{port_name}' was deactivated in target named '{target.name}'.
|
TARGET_PORT_REMOVE
|
TARGET
|
USER
|
NONE
|
Port '{port_name}' was removed from target named '{target.name}'.
|
TARGET_RENAME
|
TARGET
|
USER
|
NONE
|
Target named '{old_name}' was renamed '{target.name}'.
|
TEST_EMAIL_HAS_FAILED
|
PROACTIVE SUPPORT / ALERTS
|
ENVIRONMENT
|
SMTP Gateway / Network
|
Sending test to {Destination Name} via {SMTP Gateway} failed. Module: {Module ID}; Error message: '{Error Message}'; timeout expired: {Timeout Expired?}.
|
TEST_SMS_HAS_FAILED
|
PROACTIVE SUPPORT / ALERTS
|
ENVIRONMENT
|
SMS Gateway / Network
|
Sending test to {Destination Name} via {SMS Gateway} and {SMTP Gateway} failed. Module: {Module ID}; Error message: '{Error Message}'; timeout expired: {Timeout Expired?}.
|
TIMEZONE_SET
|
SYSTEM
|
USER
|
NONE
|
Timezone of the system was set to {Timezone}.
|
UNABLE_TO_CONNECT_TO_REMOTE_SUPPORT
|
REMOTE SUPPORT
|
ENVIRONMENT
|
Network / IP 195.110.41.141/2 port 22 / Firewall / Module
|
System is unable to connect to any remote support center.
|
UNMAP_VOLUME
|
HOST
|
USER
|
NONE
|
Volume with name '{volume.name}' was unmapped from {host_or_cluster} with name '{host}'.
|
UPGRADE_DOWNLOAD_REPOSITORY_COPY
|
HOT UPGRADE
|
SOFTWARE
|
|
Mirroring needed files from the repository failed. Mirroring module is {mirroring_module}; error is {error}.
|
UPGRADE_FILE_LIST_RETRIEVAL_FAILED
|
HOT UPGRADE
|
SOFTWARE
|
|
Could not receive new version's file list from repository. Error code is {error}.
|
UPGRADE_IS_ALLOWED
|
HOT UPGRADE
|
SOFTWARE
|
NONE
|
All of the pre-upgrade validations passed successfully.
|
UPGRADE_IS_NOT_ALLOWED
|
HOT UPGRADE
|
SOFTWARE
|
Contact IBM
|
One or more of the pre-upgrade validations failed.
|
UPGRADE_IS_OVER
|
HOT UPGRADE
|
SOFTWARE
|
NONE
|
System went up after an upgrade.
|
UPGRADE_LOCAL_VERSION_DOWNLOAD_FAILED
|
HOT UPGRADE
|
SOFTWARE
|
Contact IBM
|
Failed to distribute new software internally. Error code is {error}.
|
UPGRADE_NO_NEW_FILES_FOR_UPGRADE
|
HOT UPGRADE
|
SOFTWARE
|
Contact IBM
|
Repository version does not contain any new files. Current version is {current_version}; new version is {new_version}.
|
UPGRADE_SOFTWARE_DOWNLOAD_FINISHED
|
HOT UPGRADE
|
SOFTWARE
|
NONE
|
Finished downloading software needed for upgrade to version {version}. Upgrade consequence is {consequence}
|
UPGRADE_STARTS
|
HOT UPGRADE
|
SOFTWARE
|
NONE
|
System starting an upgrade.
|
UPGRADE_WAS_CANCELLED
|
HOT UPGRADE
|
USER
|
NONE
|
Upgrade was cancelled with reason {reason}.
|
USER_ADDED_TO_DOMAIN
|
USER ACCESS
|
USER
|
NONE
|
User {User Name} was added to domain {Domain Name} ({Exclusive}).
|
USER_ADDED_TO_USER_GROUP
|
USER ACCESS
|
USER
|
NONE
|
User '{User Name}' was added to user group '{User Group Name}'.
|
USER_DEFINED
|
USER ACCESS
|
USER
|
NONE
|
A user with name '{Name}' and category {Category} was defined.
|
USER_DELETED
|
USER ACCESS
|
USER
|
NONE
|
A user with name '{Name}' and category {Category} was deleted.
|
USER_GROUP_CREATED
|
USER ACCESS
|
USER
|
NONE
|
A user group with name '{Name}' was created.
|
USER_GROUP_DELETED
|
USER ACCESS
|
USER
|
NONE
|
A user group with name '{Name}' was deleted.
|
USER_GROUP_RENAMED
|
USER ACCESS
|
USER
|
NONE
|
User group with name '{Old Name}' was renamed '{New Name}'.
|
USER_HAS_FAILED_TO_RUN_COMMAND
|
USER ACCESS
|
ENVIRONMENT
|
USER Credentials / Authorizations
|
User '{User Name}' from IP '{Client Address}' failed authentication when trying to run command '{Command Line}'.
|
USER_LOGIN_HAS_FAILED
|
USER ACCESS
|
ENVIRONMENT
|
USER Credentials / Authorizations
|
User '{User Name}' from IP '{Client Address}' failed logging into the system.
|
USER_LOGIN_HAS_SUCCEEDED
|
USER ACCESS
|
SOFTWARE
|
NONE
|
User '{User Name}' from IP '{Client Address}' successfully logged into the system.
|
USER_REMOVED_FROM_DOMAIN
|
USER ACCESS
|
USER
|
NONE
|
User {User Name} was removed from domain {Domain Name}.
|
USER_REMOVED_FROM_USER_GROUP
|
USER ACCESS
|
USER
|
NONE
|
User '{User Name}' was removed from user group '{User Group Name}'.
|
USER_RENAMED
|
USER ACCESS
|
USER
|
NONE
|
User with name '{Old Name}' was renamed '{New Name}'.
|
USER_SHUTDOWN
|
SYSTEM
|
USER
|
NONE
|
System is shutting down due to a user request.
|
USER_UPDATED
|
USER ACCESS
|
USER
|
NONE
|
User with name '{Name}' was updated.
|
VOL_CLEAR_EXTERNAL_ID
|
VOLUME
|
USER
|
NONE
|
Volume with name '{volume.name}' cleared the External identifier.
|
VOL_SET_EXTERNAL_ID
|
VOLUME
|
USER
|
NONE
|
Volume with name '{volume.name}' changed the External identifier to '{volume.identifier}'.
|
VOLUME_COPY
|
VOLUME
|
USER
|
NONE
|
Volume with name '{source.name}' was copied to volume with name '{target.name}'.
|
VOLUME_COPY_DIFF
|
VOLUME
|
USER
|
NONE
|
Volume with name '{source.name}' was diff-copied from base '{base.name}' to volume with name '{target.name}'.
|
VOLUME_CREATE
|
VOLUME
|
USER
|
NONE
|
Volume was created with name '{volume.name}' and size {volume.size}GB in Storage Pool with name '{volume.pool_name}'.
|
VOLUME_CREATE_FAILED_BAD_SIZE
|
VOLUME
|
SOFTWARE
|
Volume Size
|
Volume with name '{name}' could not be created with size of {requested_size}GB. Volume size is not a multiple of the volume size quanta ({SLICE_MAX_NUMBER} Partitions).
|
VOLUME_CREATE_FAILED_TOO_MANY
|
VOLUME
|
SOFTWARE
|
Volumes Count
|
Volume with name '{name}' could not be created. You are attempting to add more volumes than the system permits.
|
VOLUME_CREATE_MANY
|
VOLUME
|
USER
|
NONE
|
{number} Volumes were created with names '{names}' in Storage Pool with name '{pool.name}'.
|
VOLUME_DELETE
|
VOLUME
|
USER
|
NONE
|
Volume with name '{volume.name}' was deleted.
|
VOLUME_FORMAT
|
VOLUME
|
USER
|
NONE
|
Volume with name '{volume.name}' was formatted.
|
VOLUME_LOCK
|
VOLUME
|
USER
|
NONE
|
Volume with name '{volume.name}' was locked and set to 'read-only'.
|
VOLUME_MODIFIED_DURING_IO_PAUSE
|
CONSISTENCY GROUP
|
SOFTWARE
|
Consistency Group
|
Volume '{vol_name}' of CG '{cg_name}' was modified during Pause IO with token '{token}'
|
VOLUME_MOVE
|
VOLUME
|
USER
|
NONE
|
Volume with name '{volume.name}' has been moved from Storage Pool '{orig_pool.name}' to Pool '{pool.name}'.
|
VOLUME_RENAME
|
VOLUME
|
USER
|
NONE
|
Volume with name '{old_name}' was renamed '{volume.name}'.
|
VOLUME_RESIZE
|
VOLUME
|
USER
|
NONE
|
Volume with name '{volume.name}' was resized from {old_size}GB to {volume.size}GB.
|
VOLUME_SET_ALL_SSD_CACHING
|
VOLUME
|
USER
|
NONE
|
SSD Caching was set to be '{state}' for all currently defined Volumes.
|
VOLUME_SET_DEFAULT_SSD_CACHING
|
VOLUME
|
USER
|
NONE
|
Default SSD Caching for volumes was set to be '{state}'.
|
VOLUME_SET_FLASH_BYPASS
|
VOLUME
|
USER
|
NONE
|
Flash Cache Bypass was set to be '{Bypass}' for Volume with name '{volume.name}'.
|
VOLUME_SET_SSD_CACHING
|
VOLUME
|
USER
|
NONE
|
SSD Caching was set to be '{state}' for Volume with name '{volume.name}'.
|
VOLUME_UNLOCK
|
VOLUME
|
USER
|
NONE
|
Volume with name '{volume.name}' was unlocked and set to 'writable'.
|
XCG_ADD_CG
|
CROSS CONSISTENCY GROUP
|
USER
|
NONE
|
CG with name '{cg.name}' was added to Cross Consistency Group with name '{xcg}'.
|
XCG_CREATE
|
CROSS CONSISTENCY GROUP
|
USER
|
NONE
|
Cross Consistency Group with name '{xcg}' was created.
|
XCG_DELETE
|
CROSS CONSISTENCY GROUP
|
USER
|
NONE
|
Cross Consistency Group with name '{xcg}' was deleted.
|
XCG_REMOVE_CG
|
CROSS CONSISTENCY GROUP
|
USER
|
NONE
|
CG with name '{cg.name}' was removed from Cross Consistency Group with name '{xcg}'.
|
XIV_SUPPORT_DISABLED
|
REMOTE SUPPORT
|
USER
|
NONE
|
XIV support access is disabled.
|
XIV_SUPPORT_ENABLED
|
REMOTE SUPPORT
|
USER
|
NONE
|
XIV support access from {From} is enabled from {Start Time} until {Finish Time}. Comment: {Comment}.
|
XIV_SUPPORT_ENABLED_NO_TIME_LIMIT
|
REMOTE SUPPORT
|
USER
|
NONE
|
XIV support access from {From} is enabled from {Start Time} until explicitly disabled. Comment: {Comment}.
|
XIV_SUPPORT_WINDOW_TIMEOUT
|
REMOTE SUPPORT
|
SOFTWARE
|
Support Window Timeout
|
XIV support work window timeout has expired.
|
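Note: The Action column lends itself to simple automation when post-processing exported event logs, for example to separate purely informational events from those that warrant opening a PMR. The following minimal Python sketch illustrates the idea; it is not IBM-provided tooling. The dictionary excerpt is copied from rows of Table A-1, while the function name, the fallback behavior, and the sample loop are illustrative assumptions.

# Illustrative sketch: map selected Table A-1 event codes to their Action column.
# The EVENT_ACTIONS excerpt is taken from rows of this appendix; extend it as needed.
EVENT_ACTIONS = {
    "PATCH_SCRIPT_FAILED_TO_EXECUTE": "Contact IBM",
    "PRE_UPGRADE_VALIDATION_FAILED": "Contact IBM",
    "SECOND_DISK_FAILURE": "Contact IBM",
    "SSD_HAS_FAILED": "SSD",
    "STORAGE_POOL_EXHAUSTED": "Pool Usage",
    "VOLUME_CREATE": "NONE",
}

def action_for(event_code: str) -> str:
    # Unknown codes are handled conservatively (a hypothetical choice, not from
    # the table): review them as if they required IBM involvement.
    return EVENT_ACTIONS.get(event_code, "Contact IBM")

for code in ("VOLUME_CREATE", "SECOND_DISK_FAILURE", "SOME_UNLISTED_CODE"):
    print(code, "->", action_for(code))

Running the sketch prints "NONE" for the informational VOLUME_CREATE event and "Contact IBM" for SECOND_DISK_FAILURE, matching the Action column of the corresponding rows above.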