HPE Software Products: Operations Analytics
I created shares for the CSV files and mounted them to /opt/HP/OV/nnm on the collector. I can see the CSV files, but nothing is showing up in the dashboard. Please help.
NNMi custom polling is configured.
root@hpsatvld5350:/opt/HP/opsa/data/nnm # ls
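As a quick sanity check (a hypothetical script, not part of OpsA), you can confirm that the mounted share actually exposes readable, well-formed CSV files before chasing the dashboard side. The directory path is the one shown above; adjust it for your collector.

```python
import csv
import os

# Path seen on the collector in the post above; change as needed.
CSV_DIR = "/opt/HP/opsa/data/nnm"

def check_csv_dir(path):
    """Return (filename, row_count) for every parseable .csv file in path."""
    results = []
    for name in sorted(os.listdir(path)):
        if not name.endswith(".csv"):
            continue
        full = os.path.join(path, name)
        with open(full, newline="") as fh:
            rows = sum(1 for _ in csv.reader(fh))
        results.append((name, rows))
    return results

if __name__ == "__main__":
    for name, rows in check_csv_dir(CSV_DIR):
        print(f"{name}: {rows} rows")
```

If the files list correctly here but the dashboard stays empty, the problem is more likely in the custom poller or collection configuration than in the mount itself.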
Operations Analytics: error trying to integrate ArcSight Logger
Logger registration validation completed successfully but connection to logger web service failed: org.apache.cxf.binding.soap.SoapFault: unknown
All, I am currently trying to find the OpsA Smart Connectors. The current download location is not available; please help.
All OpsA Smart Connectors are available on HP Live Network at:
2016-11-04 09:19:47,484 ERROR [com.hp.opsa.dataaccess.logger.impl.LoggerConnectionManager.loadConnectionInfoFromDB:88] (http-/0.0.0.0:8080-1) [DALClient-0009] No raw log source found for the tenant opsa_default
2016-11-04 09:19:47,485 ERROR [com.hp.opsa.dataaccess.logger.impl.LogDataAPIImpl.getRawLogSourceType:622] (http-/0.0.0.0:8080-1) [DALClient-0009] Can't recognize RowLog source file for tenant , probably no RowLog source been configured opsa_default
We have a working metric integration for NewRelic and BSMC 10.01 (with the special character hotfix). The metrics are pulled from NewRelic using the DoItWise Integration Hub and then sent to BSMC. Now we are trying to set up an OpsA collection to pull these in, in a similar manner to the OpsA<>BSMC integrations for SCOM and Nagios.
I checked the BSMC databases and the CODA object output, and created a collection XML using the SCOM collection as a template. The DB structure in BSMC for NewRelic is a bit different from SCOM, though. With SCOM we have one class per metric:
Data source: BsmIntSCOMMetrics
LogicalDisk:% Free Space NON R64 % Free Space
which translates as:
BsmIntSCOMMetrics.LogicalDisk:% Free Space.% Free Space
With NewRelic we have one datasource, two classes and many metrics in each one. For example:
Data source: NewRelic
Server NON R64 System_CPU_System_percent
Server NON R64 System_Disk_All_Utilization_percent
It goes like:
I modified the XML accordingly, and data is being collected and processed. However, the actual metrics are not collected properly. Using the metrics mentioned above, here is an example:
Metrics collected and processed in OpsA:
The collection that OpsA performs runs every 5 minutes, while the NewRelic metrics in BSMC are collected every minute. OpsA appears to pick samples at random within the 5-minute window since the last collection: it takes one timestamp from the window for one metric and different timestamps for the others. In the second example above (metrics collected and processed in OpsA), the 'System_CPU_System_percent' sample is collected at 1478245925 rather than 1478245985, while 'System_Disk_All_Utilization_percent' is collected at 1478245985.
I tried changing the key attributes in the XML (the default is RelatedCi, as per the BSMC object output), but that did not help. I also tried using RelatedCi together with CollectionTime, but that did not work either.
Does anyone have a suggestion on how to proceed?
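The mismatch can be sketched in isolation (fabricated sample values; the bucket-alignment step only illustrates the expected behaviour, not how OpsA works internally):

```python
INTERVAL = 300  # the OpsA collection period, in seconds

def snap(ts, interval=INTERVAL):
    """Align an epoch timestamp to the start of its collection bucket."""
    return ts - ts % interval

# One-minute samples per metric, as stored in BSMC (fabricated values).
samples = {
    "System_CPU_System_percent": [
        (1478245865, 3.1), (1478245925, 2.9),
    ],
    "System_Disk_All_Utilization_percent": [
        (1478245865, 41.0), (1478245925, 40.2), (1478245985, 39.8),
    ],
}

# Naive pick: each metric independently reports its latest sample time,
# which is how the mismatched timestamps described above can arise.
naive = {m: max(pts)[0] for m, pts in samples.items()}

# Bucket-aligned pick: snapping to the 5-minute interval would give every
# metric the same timestamp for the same collection cycle.
aligned = {m: snap(max(pts)[0]) for m, pts in samples.items()}
```

Here 1478245925 and 1478245985 both fall in the bucket starting at 1478245800, which is why aligning on the collection interval (rather than on each metric's own latest sample) would make the timestamps agree.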
All, I am trying to add the ArcSight Logger and I receive an error at the publish operation. Please help. OpsA 2.31 and Logger 6.
We fed events from OMi to OpsA back on 9/21/16. We liked a specific message group and added a keyword to increase the significance of specific events, so that they would be flagged as significant and help identify an issue that occurred between 12p and 3p.
That worked back on 9/21, and continued to work through at least mid-October (the last time we went back and looked at that time period).
Today (11/2) we looked at it again, and the log and event analytics pane show ZERO significant events for the time period 12p-3p on 9/21.
Has anyone seen this behavior? Any ideas regarding why these events are no longer significant?
I'm trying to pull HPOO and HPCSA logs. What is the best way to pull them into the collector?
I am trying to correlate metrics within Opsa.
Metric correlation can be done within the same collection. For example, CPU and memory metrics can be correlated since they are both in the "oa_sysperf_global" collection.
Is it possible to correlate metrics from different collections? For example, can a CPU metric be correlated with a server response time metric from RUM, and if so, how?
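As a generic illustration (outside OpsA, with fabricated values), correlating metrics from two different collections amounts to joining the two series on timestamp and computing a correlation over the overlapping samples:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Fabricated series keyed by epoch timestamp:
cpu = {1000: 10.0, 1060: 20.0, 1120: 30.0, 1180: 40.0}      # e.g. oa_sysperf_global
rum = {1000: 110.0, 1060: 140.0, 1120: 170.0, 1240: 300.0}  # e.g. RUM response time

# Join on the timestamps both collections have in common.
common = sorted(set(cpu) & set(rum))
r = pearson([cpu[t] for t in common], [rum[t] for t in common])
```

The practical obstacle inside OpsA is getting both series with comparable timestamps in one query; the math itself is this simple once the series are aligned.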
Thanks for your help,
Good day, I hope someone can assist. I am trying to install OpsA on a Linux VM and the IP check keeps failing. The installation log shows the same error as the installation wizard: make sure that there is only one IPv4 address and that it is not a loopback address, otherwise the OpsA postinstall will be corrupted. I cannot click Next to proceed from there. I have tried setting the network adapter to bridged, NAT, and host-only, with no luck. Any ideas?
I wanted one clarification: can Operations Analytics be installed on a single machine? I mean the Server, Collector, Logger, and Vertica all on one machine? I am talking about the trial version.
Hi, I need one piece of information.
In the OpsA 2.31 installation document, the packages are listed as in the attached doc.
When I tried to download the software, I got one single zip file of 7.2 GB with this description:
Is this the same content, combined from all 4 zip files, or is this something different?
As far as I know, OpsA uses an automatically configured Flex connector for self-monitoring.
However, during the integration it asks for the serial number of the Logger host. How can I get the serial number of the Logger host?
To view Operations Analytics collector logs, you need to run opsa-flex-config.sh on each Operations Analytics Collector host and perform the following steps from the command line:
* Enter the serial number of the Logger host for which you want to configure the Operations Analytics Log File Connector for HP Arcsight Logger.
We would like to find out the following.
We currently have OpsA 2.31 integrated with ArcSight Logger.
Windows events are being populated; however, when we look at the arcsight_log_stream table in Vertica, it seems only some of the 130 fields from Logger are being populated.
Has anyone had the same issue? I can see there are some config files I could possibly edit, but I would like to hear whether anyone else has done this before while I await an answer from HPE Support.
I have tried to get information in Log and Event Analytics - Most Significant Messages, but the panel is empty.
The error shown is: com.hp.opsa.aql.execution.error.AQLServiceException: Data could not be retrieved. Try checking the opsa-task-manager logs on the Operations Analytics Appliance Server. There are these messages in opsa-task-manager-service.log:
WARNING: Couldn't flush system prefs: java.util.prefs.BackingStoreException: /etc/.java/.systemPrefs/com create failed.
Messages are coming from Splunk to OpsA.
Any idea why this is happening?
As the topic name suggests, I am interested in using saved selections in the OpsA dashboards while still fine-tuning the whole query.
To describe my problem:
Is there a way or workaround to keep those selections saved, or is it a limitation?
Thanks for any input.
My customer wants to change the session timeout from the default (20 minutes) to a longer period, since he has integrated the OpsA dashboard with OMi and is experiencing strange login-redirect issues to the OpsA server when the session expires.
Is there an option in a configuration file to change the default session timeout to a custom value?
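One avenue to check, assuming the OpsA UI is a standard Java web application deployed on JBoss (this is an assumption, not confirmed by the post or by OpsA documentation): servlet containers read the session timeout from the application's web.xml, where it is expressed in minutes.

```xml
<!-- Standard servlet-container setting; whether the OpsA UI honors
     web.xml for its session lifetime is an assumption, not confirmed. -->
<session-config>
    <!-- Timeout in minutes; raised here from the 20-minute default to 60. -->
    <session-timeout>60</session-timeout>
</session-config>
```

If OpsA manages sessions through its own configuration instead, HPE Support should be able to point to the product-specific setting; the OMi side has its own session timeout as well, and the two would need to agree to avoid the redirect loop.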
Thank you in advance and best Regards,
I'm facing errors in the OpsA UI, where some components are not working at all, such as creating a new dashboard.
I searched the logs and found many entries of the following message in /opt/HP/opsa/log/opsa-task-manager/opsa-task-manager.log:
ERROR - ClusterManager: Error managing cluster: Failed to obtain DB connection from data source 'dataSource': java.sql.SQLException: Failed to get connection for Quartz Connection Provider[Vertica][VJDBC](100176) Failed to connect to host <vertica_ip> on port 5433. Reason: Failed to establish a connection to the primary server or any backup address.
where <vertica_ip> is the private IP used by the Vertica node for their private connections.
I didn't give this IP during the postinstall script, but OpsA is still somehow using it to set up the connection.
Kindly advise how I can resolve this.
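A quick, hypothetical reachability probe (a plain TCP connect, not a Vertica client) can confirm which of the two addresses is actually reachable from the OpsA server on the Vertica client port. The addresses below are placeholders; substitute the IP you gave at postinstall and the private cluster IP from the error.

```python
import socket

VERTICA_PORT = 5433
# Placeholders: the postinstall IP and the Vertica private IP, respectively.
CANDIDATES = ["192.0.2.10", "192.0.2.20"]

def can_connect(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for host in CANDIDATES:
        state = "reachable" if can_connect(host, VERTICA_PORT) else "unreachable"
        print(f"{host}:{VERTICA_PORT} {state}")
```

If only the private IP is unreachable, that narrows the problem to OpsA having picked up the wrong address (for example from the Vertica node catalog), rather than to Vertica being down.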
I have a full OpsA console that is already collecting logs from Windows servers and from a OneView console. I need to repeat the integration of OneView with full OpsA, but when I try to repeat the process (with the same OneView console), I get the following message:
"Specified collector host is not a registered collector"
My question is: what do I need to do to repeat the integration of the same OneView console with full OpsA? This console is used to demonstrate the process to our customer and to monitor our servers.