HPE Software Products: Operations Analytics

NNMi data not showing up in OpsA

 

 

I created shares for the CSV files and mounted them at /opt/HP/OV/nnm on the collector. I can see the CSV files, but nothing is showing up in the dashboard. Please help.

 

NNMi custom polling is configured.

 

root@hpsatvld5350:/opt/HP/opsa/data/nnm # ls

f_Hour_InterfaceMetrics_20161115130000_001.csv.gz f_Raw_ComponentMetrics_20161115141510_001.csv.gz

f_Raw_ComponentMetrics_20161115133507_001.csv.gz   f_Raw_ComponentMetrics_20161115142010_001.csv.gz

f_Raw_ComponentMetrics_20161115134020_001.csv.gz   f_Raw_ComponentMetrics_20161115142510_001.csv.gz

f_Raw_ComponentMetrics_20161115134510_001.csv.gz   f_Raw_ComponentMetrics_20161115143011_001.csv.gz

f_Raw_ComponentMetrics_20161115135011_001.csv.gz   f_Raw_ComponentMetrics_20161115143512_001.csv.gz

f_Raw_ComponentMetrics_20161115135510_001.csv.gz   f_Raw_ComponentMetrics_20161115144007_001.csv.gz

f_Raw_ComponentMetrics_20161115140010_001.csv.gz   f_Raw_ComponentMetrics_20161115144506_001.csv.gz

f_Raw_ComponentMetrics_20161115140511_001.csv.gz   f_Raw_ComponentMetrics_20161115145010_001.csv.gz

f_Raw_ComponentMetrics_20161115141005_001.csv.gz   f_Raw_ComponentMetrics_20161115145509_001.csv.gz

root@hpsatvld5350:/opt/HP/opsa/data/nnm #
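For reference, the file names in the listing above encode a granularity, a table name, and a timestamp (YYYYMMDDhhmmss). A small sketch to pull those fields apart, which can help confirm that fresh files keep arriving while nothing shows in the dashboard (the parsing pattern is inferred from the listing, not from OpsA documentation):

```shell
# Decode an OpsA/NNMi CSV drop file name such as
#   f_Raw_ComponentMetrics_20161115141510_001.csv.gz
# into "<granularity> <table> <timestamp>". Pattern inferred from the
# directory listing above; adjust if your file names differ.
csv_parts() {
  echo "$1" | sed -E 's/^f_([^_]+)_([^_]+)_([0-9]{14})_.*/\1 \2 \3/'
}

csv_parts f_Raw_ComponentMetrics_20161115141510_001.csv.gz
# → Raw ComponentMetrics 20161115141510
```

If the newest timestamp keeps advancing, the collector side is producing data and the gap is more likely on the OpsA server side (processing or tenant configuration).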

 

Operations Analytics error when integrating ArcSight Logger

The error message is:

Logger registration validation completed successfully but connection to logger web service failed: org.apache.cxf.binding.soap.SoapFault: unknown

 

OpsA SmartConnectors

All, I am currently trying to find the OpsA SmartConnectors. Please help; the current location is not available.

 

All OpsA Smart Connectors are available on HP Live Network at:

 

https://hpln.hp.com/contentoffering/smart-connectors-operations-analytics-and-operations-log-intelligence

 

OpsA trying to integrate Logger: No raw log source found for the tenant opsa_default

2016-11-04 09:19:47,484 ERROR [com.hp.opsa.dataaccess.logger.impl.LoggerConnectionManager.loadConnectionInfoFromDB:88] (http-/0.0.0.0:8080-1) [DALClient-0009] No raw log source found for the tenant opsa_default

2016-11-04 09:19:47,485 ERROR [com.hp.opsa.dataaccess.logger.impl.LogDataAPIImpl.getRawLogSourceType:622] (http-/0.0.0.0:8080-1) [DALClient-0009] Can't recognize RowLog source file for tenant , probably no RowLog source been configured opsa_default

 

Issues with setting up a metric collection for NewRelic in OpsA

Hi all,

 

We have a working metric integration for NewRelic and BSMC 10.01 (with the special character hotfix). The metrics are pulled from NewRelic using the DoItWise Integration Hub and then sent to BSMC. Now we are trying to set up an OpsA collection to pull these in a similar manner to the OpsA<>BSMC integrations for SCOM and Nagios.

I checked the BSMC databases, CODA object output and created a collection XML using the SCOM collection as a template. The DB structure in BSMC for NewRelic is a bit different compared to SCOM though. With SCOM we have one class per metric:

<datasource>.<classname>.<metricname>

For example:

Data source: BsmIntSCOMMetrics

LogicalDisk:% Free Space NON R64 % Free Space

which translates as:

BsmIntSCOMMetrics.LogicalDisk:% Free Space.% Free Space

 

With NewRelic we have one datasource, two classes and many metrics in each one. For example:

Data source: NewRelic

Server NON R64 System_CPU_System_percent

Server NON R64 System_Disk_All_Utilization_percent

It goes like:

NewRelic.Server.System_CPU_System_percent

NewRelic.Server.System_Disk_All_Utilization_percent
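Just to make the naming explicit, the three-part key is a plain dot join of data source, class name, and metric name; a trivial sketch:

```shell
# Compose the <datasource>.<classname>.<metricname> key described above.
metric_key() {
  printf '%s.%s.%s\n' "$1" "$2" "$3"
}

metric_key NewRelic Server System_CPU_System_percent
# → NewRelic.Server.System_CPU_System_percent
metric_key BsmIntSCOMMetrics "LogicalDisk:% Free Space" "% Free Space"
# → BsmIntSCOMMetrics.LogicalDisk:% Free Space.% Free Space
```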

 

I modified the XML accordingly, and data is being collected and processed. However, the actual metric values are not collected correctly. Using the metrics mentioned above, here is an example:

BSMC DB:

CollectionTime: 1478245985

System_CPU_System_percent: 55.31

System_Disk_All_Utilization_percent: 33.21

 

Metrics collected and processed in OpsA:

CollectionTime: 1478245985

System_CPU_System_percent: 13.01

System_Disk_All_Utilization_percent: 33.21

 

The collection that OpsA performs runs every 5 minutes, while the NewRelic metrics in BSMC are collected every minute. It appears that OpsA picks samples at random from within the 5-minute window since the last collection: it reports one timestamp for the window but collects the individual metrics at random points inside that same window. In the second example above ('Metrics collected and processed in OpsA'), the 'System_CPU_System_percent' sample was collected at 1478245925, not 1478245985, while 'System_Disk_All_Utilization_percent' was collected at 1478245985.
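The two collection times can be checked directly; the offset is exactly one NewRelic collection interval (this assumes GNU `date` for the `-d @epoch` form):

```shell
# The two epoch timestamps from the example above, decoded as UTC.
date -u -d @1478245985 '+%Y-%m-%d %H:%M:%S'   # → 2016-11-04 07:53:05
date -u -d @1478245925 '+%Y-%m-%d %H:%M:%S'   # → 2016-11-04 07:52:05
echo $(( 1478245985 - 1478245925 ))           # → 60, i.e. exactly one
                                              #   BSMC/NewRelic sample apart
```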

 

I tried changing the key attributes in the XML (the default is RelatedCi, as per the BSMC object output), but that did not help. I also tried using RelatedCi and CollectionTime together, but that did not work either.

Does anyone have a suggestion on how to proceed?

 

Thanks,

Alex

 

 

OpsA integration with ArcSight Logger: error at publish operation

All, I am trying to add the ArcSight Logger and I receive an "Error at publish operation" message. Please help. OpsA 2.31 and Logger 6.

 

 

Events from an earlier time period were, but are no longer, significant

We fed events from OMi to OpsA back on 9/21/16. We selected a specific message group and added a keyword to increase the significance of specific events so that they would be identified as significant, to help identify an issue that occurred between 12p and 3p.

That worked back on 9/21, and continued to work through at least mid-October (the last time we went back and looked at that time period).  

Today (11/2) we looked at it again, and the Log and Event Analytics pane shows ZERO significant events for the time period 12p-3p on 9/21.

Has anyone seen this behavior?  Any ideas regarding why these events are no longer significant?

Thanks

 

OpsA SmartConnectors to communicate with HPOO and HPCSA

I'm trying to pull HPOO and HPCSA logs. What is the best way to pull them into the collector?

 

OpsA default URL login

Please help: what is the default URL for logging in to the OpsA server and collector? I have configured the OpsA server, Vertica, and collector to talk to each other; now I need the default URL to log in to the server and collector.

 

OpsA Metric Correlation

Hi there,

I am trying to correlate metrics within OpsA.

Metric correlation can be done within the same collection. For example, CPU and memory metrics can be correlated since they are both in the "oa_sysperf_global" collection.

Is it possible to correlate metrics across different collections? For example, can a CPU metric be correlated with a server response time metric from RUM, and if so, how?

Thanks for your help,

Burak

 

 

 

OPSA Server installation fails on IP Check

Good day, I hope someone can assist. I am trying to install OPSA on a Linux VM and the IP check keeps failing. The installation log shows the same error as the installation wizard: make sure that there is only one IPv4 address and that it is not a loopback address, otherwise the OpsA post-install will be corrupted. I cannot click Next to proceed from there. I tried setting the network adapter to bridged, NAT, and host-only, with no luck. Any ideas?
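The installer's complaint can be verified by hand before re-running it. A sketch that counts non-loopback IPv4 addresses from `ip -4 -o addr show` output (the wizard reportedly wants exactly one):

```shell
# Count non-loopback IPv4 addresses in `ip -4 -o addr show` output.
# The OpsA installer's IP check expects exactly one such address.
count_global_ipv4() {
  awk '$3 == "inet" && $4 !~ /^127\./ { n++ } END { print n + 0 }'
}

# Live usage on the VM:
#   ip -4 -o addr show | count_global_ipv4    # should print 1
```

If this prints more than 1 (common on VMs with several adapters), temporarily disabling or deconfiguring the extra adapters so that only one IPv4 address remains is a reasonable workaround to get past the check.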

 

Operations Analytics Installation

Hello Everyone,

I wanted one clarification: can Operations Analytics be installed on a single machine? I mean server, collector, Logger, and Vertica all in one machine? I am talking about the trial version.

 

Operations Analytics

Hi,

I need one piece of information.

In the OpsA 2.31 installation document, it is given as in the attached doc.

When I tried to download the software, I got one single ZIP file of 7.2 GB with the description:

"HP_Operations_Analytics_for_HP_OneView_Sept_2015_Z7550-96174.zip".

Is this the same as all 4 ZIP files combined, or is it a different one?

 

 

 

OpsA self-monitoring

Hi people,

As far as I know, OpsA uses an automatically configured Flex connector for self-monitoring.

However, for the integration it asks for the serial number of the Logger host. How can I get the serial number of the Logger host?

 

To view Operations Analytics collector logs, you need to run opsa-flex-config.sh on each Operations Analytics Collector host and perform the following steps from the command line:

  1. Review the list of Logger hosts already configured for the opsa_default tenant.

  2. Enter the serial number of the Logger host for which you want to configure the Operations Analytics Log File Connector for HP Arcsight Logger.


 

HP OPSA - ArcSight Logger integration Issue

Good day

 

We would like to find out about the following.

 

We currently have OPSA 2.31 integrated with ArcSight Logger.

Windows events are being populated; however, when we look at the arcsight_log_stream table in Vertica, it seems it is only populating some of the 130 fields from Logger.

Has anyone had the same issue? I can see there are some config files that I could possibly edit, but I would like to hear whether anyone else has done this before, while I await an answer from HPE Support.

 

Thanks

 

The Most Significant Messages pane doesn't show data

Hi,

I have tried to get information in Log and Event Analytics - Most Significant Messages, but the panel is empty.

The information shown is: com.hp.opsa.aql.execution.error.AQLServiceException: Data could not be retrieved. Try checking the opsa-task-manager logs on the Operations Analytics Appliance Server. There are these messages in opsa-task-manager-service.log:

WARNING: Couldn't flush system prefs: java.util.prefs.BackingStoreException: /etc/.java/.systemPrefs/com create failed.
Jan 15, 2016 4:43:01 PM java.util.prefs.FileSystemPreferences syncWorld
WARNING: Couldn't flush system prefs: java.util.prefs.BackingStoreException: /etc/.java/.systemPrefs/com create failed.
Jan 15, 2016 4:43:31 PM java.util.prefs.FileSystemPreferences syncWorld
WARNING: Couldn't flush system prefs: java.util.prefs.BackingStoreException: /etc/.java/.systemPrefs/com create failed.

Messages are coming from Splunk to OpsA.

Any idea why this is?
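The `BackingStoreException` warnings in the log excerpt are a generic JVM-on-Linux symptom rather than anything OpsA-specific: the process cannot create `/etc/.java/.systemPrefs/com`. A hedged workaround sketch, pre-creating the Java preference directories the log shows the JVM failing to create (run as root on the OpsA server); note this usually only silences the warning and may not by itself fix the empty pane:

```shell
# Pre-create the Java system/user preference directories under the base
# path seen in the log message (/etc/.java by default), so the JVM's
# java.util.prefs FileSystemPreferences can flush without errors.
fix_java_prefs() {
  local base="${1:-/etc/.java}"
  mkdir -p "$base/.systemPrefs" "$base/.userPrefs"
  chmod 755 "$base" "$base/.systemPrefs" "$base/.userPrefs"
}

# Real run (as root):
#   fix_java_prefs
```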

 

Prevent losing all selections after redefining the time range

Hi Everyone

As the topic name suggests, I am interested in using saved selections on the OpsA dashboards while still fine-tuning the whole query.

To describe my problem:
I am using a dashboard and clicking some additional metrics onto a simple line chart. I then notice that there are some unexpected issues at a specific time, lasting about 30 minutes. Now I want to zoom into this time range while keeping the selected metrics. Unfortunately, this does not work for me with OpsA 2.31: all selections are reset when I define the new timeframe, and I have to reselect all of the metrics.

Is there a way or workaround to keep those selections saved, or is it a limitation?

Thanks for any input.

Cheers

 

How to customize the OpsA server session timeout?

Hi Folks

My customer wants to change the session timeout value from the default (20 minutes) to a longer period, since he has integrated the OpsA dashboard with OMi and is experiencing some strange login redirect issues to the OpsA server when the session expires.

Is there perhaps an option in a configuration file to change the default session timeout to a custom value?

Thank you in advance and best Regards,

Simon

 

Error with OpsA UI

Hi, 

 

I'm facing errors with the OpsA UI, where some of the components are not working at all, such as creating a new dashboard.

 

I searched the logs and found that in /opt/HP/opsa/log/opsa-task-manager/opsa-task-manager.log there are many entries of the following message:

 

ERROR - ClusterManager: Error managing cluster: Failed to obtain DB connection from data source 'dataSource': java.sql.SQLException: Failed to get connection for  Quartz Connection Provider[Vertica][VJDBC](100176) Failed to connect to host <vertica_ip> on port 5433. Reason: Failed to establish a connection to the primary server or any backup address.

 

where <vertica_ip> is the private IP used by the Vertica nodes for their private connections.

 

I didn't give this IP during the postinstall script, but somehow OpsA is still using it to set up the connection.
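One quick way to confirm the symptom is a raw TCP probe from the OpsA server toward the Vertica node, against both the address given at postinstall time and the private one from the error (the `<vertica_ip>` placeholder stays a placeholder). A sketch using bash's `/dev/tcp` redirection plus coreutils `timeout`, so it needs no extra tools:

```shell
# Probe a host:port over TCP; prints "reachable" or "unreachable".
probe_port() {
  local host="$1" port="$2" secs="${3:-3}"
  if timeout "$secs" bash -c "echo > /dev/tcp/$host/$port" 2>/dev/null; then
    echo reachable
  else
    echo unreachable
  fi
}

# Live usage from the OpsA server (replace the placeholder):
#   probe_port <vertica_ip> 5433
```

If the probe to the private interconnect address fails while the public address succeeds, OpsA has somehow picked up the wrong Vertica address; which OpsA configuration holds it is worth raising with support rather than guessing here.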

 

Kindly advise how I can resolve this.

 

 

Integrating OneView with full OpsA

Hi,

 

I have a full OpsA console that is already collecting logs from Windows servers and from a OneView console. I need to repeat the OneView integration with full OpsA, but when I try to repeat the process (same OneView console), I get the following message:

 

   "Specified collector host is not a registered collector"

 

My question is: what do I need to do to repeat the integration of the same OneView console with full OpsA? This console is for showing our customer this process and for monitoring our servers.

 

Thanks,


Contact Us

Vivit Worldwide
P.O. Box 18510
Boulder, CO 80308

Email: info@vivit-worldwide.org

Mission

Vivit's mission is to serve the Hewlett Packard Enterprise User Community through Advocacy, Community, and Education.