|HPE Software Products: Cloud Optimizer|
We are using CO 3.01 and get an error from the discovery script:
0: ERR: Wed Oct 5 15:30:05 2016: agtrep (23659/140238358066944): (agtrep-132) Error in executing discovery action(s): WARNING: Policy action "/opt/OV/nonOV/python/bin/python /var/opt/OV/bin/instrumentation/service-discovery.py services" returned with non-zero exit value. Output of action is ignored.
0: ERR: Wed Oct 5 15:30:05 2016: agtrep (23659/140238358066944): (agtrep-133) No output received from discovery policy action
I have seen that the script contains a variable named rp_rp_inst, which I assume is a typo. Is this a known defect, and where does it come from?
Can somebody advise on this?
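One way to narrow this down is to run the policy action manually, outside the agent, and look at the exit code and any traceback. The paths below are copied from the error message above; adjust them if your install differs.

```shell
# Run the discovery action exactly as agtrep invokes it
# (paths taken from the agtrep error message above).
/opt/OV/nonOV/python/bin/python \
    /var/opt/OV/bin/instrumentation/service-discovery.py services

# A non-zero exit code here reproduces the agtrep-132 warning; any
# Python traceback (e.g. a NameError on rp_rp_inst) points at the bug.
echo "exit code: $?"
```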
Can someone help me understand how to change the threshold for monitoring datastore utilization? I could not find any configuration for datastores in the policy vPV Custom Alert Sensitivity Definition.
Would anyone know how to organize VMs by groups in Cloud Optimizer?
Is there any way to clear a hostname entry from "Status of Datasources", as it is creating a false status?
The 'Download Collector' link is not responding when I try to download the datasource for Hyper-V.
Please find the attachment.
Server OS: CentOS 6.4
Client: Windows Server 2008
Client browser: Chrome, IE10
We are using CO 3.01 (with hotfix) to monitor a vSphere environment, together with the OMi 10.12 IP. The vSphere environment has around 450 VMs. I can see the VMs populated in OMi, and all guests have the "is data collector for" link set correctly, but only a few of the guests have the "Execution environment" link to the hypervisor. As a result the vPV_Infrastructure view is mostly empty (missing about 95% of the guests).
- Why is the link from the guest to the hypervisor missing?
I am integrating CO 3.01 and OMi 10.12 IP1.
The following error occurs:
I have a few questions regarding the default "changeit" passwords on the tomcat.keystore, jssecacerts and also the default vertica password. I would greatly appreciate if someone like Ramki could give me some insight on this.
First off, I'd like to suggest that you consider documenting this for the next CO release, as we see it as a major security leak.
Now to the point, my questions are:
1. If CO is set to run under LDAP/LDAPS security, and the keystore used is called jssecacerts, then cacerts is not consulted, as explained here? If so, I think this should be documented to clear any confusion.
2. After quite some time, we managed to change the tomcat.keystore password. We did it by running keytool -storepasswd -alias ovtomcatb -keystore /var/opt/OV/certificates/tomcat/b/tomcat.keystore, and then, since CO was failing to load, we had to edit the /opt/OV/nonOV/tomcat/b/conf/server.xml.ovtemplate file and add keystoreFile="@TOMCAT_KEYSTORE_FILE@" keystorePass="ournewpassword" parameters to both connectors in that file. Is this the proper way to do this? If so, it should probably be documented as well; if not, please explain what the proper way is.
3. Following up on 1.: if we are using LDAP authentication and we have the jssecacerts keystore, I would assume the keystore password should be changed using the same command, but what files need to be modified in order for it to be connected to CO?
4. Finally, I'd like to refer directly to the CO 3.0 installation guide: on page 51 it states that changing the default CO password for the Vertica database is recommended, but the method to do it is not specified. I know you will most likely refer me to some guide on the Vertica website, but please explain how to do this, and why is it only hinted at rather than explained?
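For the Vertica part: assuming the embedded database follows standard Vertica administration, the password can be changed with an ALTER USER statement via vsql. The database name "pv" and user "dbadmin" below are what I understand the vPV/CO defaults to be, not something I can confirm from the guide, so please verify them against your install first.

```shell
# Connect to the embedded Vertica instance and change the password.
# Database name "pv" and user "dbadmin" are ASSUMED defaults -- verify
# against your own install before running this.
/opt/vertica/bin/vsql -U dbadmin -d pv \
    -c "ALTER USER dbadmin IDENTIFIED BY 'NewStrongPassword';"
```

If CO stores this database password anywhere in its own configuration, that would presumably need updating too, which is exactly the kind of detail the guide should spell out.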
If anyone can answer me on the above questions, your input will be much appreciated.
Thank you in advance!
We are using CO for vSphere. We have connected it and it says "successful", but after a while the connection fails with an unspecified error. We have upgraded to CO 3.01, but the error is the same.
WARN [2016-10-20 14:27:59,442] : Thread[pool-5-thread-8,5,main] Collection failed for ResourcePool entity. Collection shall be retried in the next interval.
We have also enabled xpl tracing, but the errors are very cryptic:
ERROR: Pipeline segment ThresholdingTriggerSegment accepts data of type PN6OAData15ObservationListE. The data type of the object passed as input is N6OAData20InstanceQueryRequestE. Ignoring the data...
ThresholdingTriggerSegment segment received an unwanted data. Returning error...
I have installed Cloud Optimizer on VMware and I can access it using the URL. Now my manager says to integrate it with VMware. I'm not sure whether that is possible; I'm totally new to this product and couldn't find any document related to it.
If it is possible, can you please send me the document?
Also, I have to integrate it with OMi.
Your help would be highly appreciated.
Recently we upgraded Cloud Optimizer from 3.0 to the 3.01 patch, and after that we are not able to log in to Cloud Optimizer through the web page; it gives "invalid user name". LDAP authentication is not working either.
All, please note the availability of CO 3.01. This is a minor-minor release providing these features over the previous release, CO 3.0.
Here are the locations from which the patch installer or VA update files can be downloaded, depending on your needs.
Knowledge doc KM02528354, patch name VPVZIP_00006
Knowledge doc KM02528352, patch name VPVISO_00005
Knowledge doc KM02528350, patch name VPVINSTALLER_00006
Here's the list of updated documents for CO:
Cloud Optimizer 3.01 Getting Started Guide
Cloud Optimizer 3.01 Installation Guide
Cloud Optimizer 3.01 Metric Definition Reference Guide
Cloud Optimizer 3.01 Online Help (PDF)
Cloud Optimizer 3.01 Open Source Third Party License Agreement
Cloud Optimizer 3.01 Release Notes
X86 Virtualization Technology Evolution to Cloud Optimizer
Cloud Optimizer 3.01 ComputeSensor User Guide
HPE CO Documentation Library
Cloud Optimizer 3.01 Complete Documentation Set
Hope this helps.
I am trying to set up a new install and integrate with my AD for LDAP. When I click "Look Up User", I get the following in my trace file:
java.security.AccessControlException: Validate LDAP information operation is for admins only!
Is this an issue with the admin user specified (AD throwing the error), or is it a misconfiguration of my LDAP properties in CO?
I have added vCenter Server 6 to my Cloud Optimizer (for the first time).
The test connection is successful.
The data collection status changed to "data collection in progress", but it never completes (I left it for 3 days, still the same).
I changed the logging level to 2 in vCenter Server and set the max query value to 600.
Please help me with this.
The HPCO installation guide for the virtual appliance states on page 26:
"If authentication is enabled, log in using the user name and password. The Admin page
I have always installed with the "default" settings and am not aware of an option to enable "authentication".
Question 1: Is there an option to enable authentication as part of the install?
Question 2: How does one enable authentication when HPCO was installed WITHOUT authentication enabled?
Whenever I try to add the CSA datasource, I get the following error:
"Failed to save the CSA information due to wrong credentials or URL"
The URL is https://IPADDRESS:8444 and the credentials are tested and working.
The vPV version is 3.0.0 and the CSA version is 4.7.
Is there anything else I should check?
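One thing worth checking from the CO server itself is whether the CSA URL is reachable and whether the certificate chain is trusted; a credentials-or-URL error can also be masking a TLS failure. A quick probe with curl (the exact CSA path to hit is an assumption here; the base UI URL on port 8444 should at least answer):

```shell
# From the CO server: verify reachability and inspect the TLS handshake.
# Replace IPADDRESS with your CSA host. -k skips certificate validation;
# if the request only works WITH -k, the problem is likely an untrusted
# certificate rather than the credentials.
curl -v -k https://IPADDRESS:8444/csa
```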
I am new to the Cloud Optimizer product (vPV). Can anyone point me to the document library for this product?
The kinds of documents I am looking for are installation, configuration, possibilities, limitations, etc.
Thanks in advance.
I've deployed a Cloud Optimizer 3.00 virtual appliance to see how the product works with OpenStack (Liberty), and I'm wondering if what I'm seeing in the interface is correct. I've added the KVM compute nodes (Ubuntu 14.04) as datasources, and have also added an OpenStack datasource. One thing I noticed is that all of the OpenStack instances show up under their KVM names, like instance-0000005d. I would have expected the names to be rationalized once the OpenStack datasource was added, so that the instance names were more intelligible. I'm not sure whether I have a configuration issue with Cloud Optimizer or whether this is how it's supposed to be.
I've also noticed that even though the OpenStack user I've given to Cloud Optimizer has access to several projects, I'm only seeing one project's worth of data. I'm using Keystone v3.0, and I'm not sure specifically what Cloud Optimizer is expecting.
Can anyone say whether the data I'm seeing in Cloud Optimizer is what I should expect, or whether it should be presenting instance names more appropriately and pulling data from all tenants the user account has access to?