HPE Software Products: Enterprise Maps Practitioners Forum

DQL Error using native query

Hi All,

I want to execute a native SQL query within a DQL query. Based on the documentation I built a simple example to understand the native clause, but I always get an error when executing the query.

Here is the DQL query:

select s.name from (naitive (name) {select name_val as name from ryga_nodeArtifacts where discriminator = 'server'}) s

Here is the error:

Error: Error processing the DQL statement [dql=select s.name from (naitive (name) {select name_val as name from ryga_nodeArtifacts where discriminator = 'server'}) s]; nested exception is com.hp.systinet.dql.impl.parser.DqlParseExceptionImpl: DQL Parsing error: no viable alternative at input 'naitive' in phase PARSER at line 1, column 20 for query: "select s.name from (naitive (name) {select name_val as name from ryga_nodeArtifacts where discriminator = 'server'}) s".
SQLState:  null
ErrorCode: 0

 

Does anyone have an idea how to fix this?

Thanks,
Tobi
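Judging by the parser message, which stops at the token 'naitive', the likely cause is simply the spelling: the documented DQL keyword for embedding a native SQL subquery is native. A sketch of the corrected query, otherwise unchanged from the one posted:

```sql
-- 'native' (not 'naitive') is the DQL keyword for a native SQL subquery;
-- table and column names here are as in the original post.
select s.name
from (native (name) {select name_val as name
                     from ryga_nodeArtifacts
                     where discriminator = 'server'}) s
```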


Deploying Enterprise Maps in Amazon EC2

Yes, it is possible. Look up and launch the AMI I have just released using the following fields under EC2 > AMIs in the AWS console.

AMI Name: import-ami-fh1psy9j
AMI ID: ami-fdc896ea
Description: HPE Enterprise Maps 3.20

 

The t2.medium server profile is fine. I see the following self-test report:

 
CPU/Memory Performance Checker : SUCCESS: Performance index: 522 (good)
CPU/Memory Performance Checker : This check took 522ms.
Filesystem Performance Checker : SUCCESS: Performance index: 150 (good)
Filesystem Performance Checker : This check took 151ms.
Database Connection Performance Checker : SUCCESS: Performance index: 388 (good)

 

Error when retrieving a large amount of data from EM

Hello,

We have a search API function in EM that retrieves products based on some criteria. When we call it with a normal search keyword it returns data successfully, but when we call it with an empty string it returns this error:

Encountered code generation error while compiling function \"f\": generated bytecode for method exceeds 64K limit.

The complete error stack is attached (error.png). It is clear that the reason is that the returned data set is very large. I thought that to fix it we might need to modify the Content-Length header field in the response coming from EM. Would this help, and if so, how do we change the Content-Length of the EM response header?
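One caveat worth noting: Content-Length merely reports the size of the body the server is about to send, so changing it cannot shrink the result set. If the failure really is driven by result size, a more promising direction is to fetch in pages, since the endpoint already accepts from and pageSize parameters (visible in the searchProduct URLs posted in this thread). A sketch, with "example.internal" as a placeholder host, assuming Node 18+ for the global fetch:

```javascript
// Sketch only: the host is a placeholder; the searchcriteria/from/pageSize/
// includeCount parameters are those seen in the searchProduct URLs in this
// thread. Fetches one page at a time until a short (last) page is returned.
async function fetchAllProducts(base, criteria, pageSize = 50) {
  const records = [];
  for (let from = 0; ; from += pageSize) {
    const url = `${base}?searchcriteria=${encodeURIComponent(criteria)}` +
                `&from=${from}&pageSize=${pageSize}&includeCount=1`;
    const page = await (await fetch(url)).json();
    records.push(...page.data.records);
    if (page.data.records.length < pageSize) break; // short page = last page
  }
  return records;
}
```

Paging keeps each individual response small, which sidesteps whatever per-response limit the conversion step is hitting, at the cost of more round trips.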

 

Incorrect result count retrieved from EM API

I am working with an API that retrieves products from EM using search criteria.

When searching for any text or any single special character it works fine, as below.

Request: search for a product named ABC

http://c9t24014.itcs.hpecorp.net:8080/em/remote/execute/scripts/SearchAPIv2/searchProduct?searchcriteria=ABC&practiceFilter=%27a0c2550e-c81c-407f-b4ba-9efd4735ef20%27,%27b828e336-335e-4145-a2a5-ce1c980e88a6%27,%274b5778e3-a60a-40c2-81e7-fdb52199d052%27,%27830cfb9f-3d69-4c55-b897-9fe37f88a9f1%27,%27dd5c5a4d-4f75-470d-b9b1-772888f6b6da%27,%273569dd6c-aaf7-47a1-b70b-59ce15620002%27,%2741877c51-3a8e-4db6-b620-9dbf476d5fc5%27,%27d4fa7d6a-3d54-45ff-a523-8d344e2987db%27&from=0&pageSize=5&includeCount=1

Response: no product exists

{ success: true, data: 
{"resultCount":0,"records":[],"pageSize":"5","from":"0"} }

And with the special character @:

http://c9t24014.itcs.hpecorp.net:8080/em/remote/execute/scripts/SearchAPIv2/searchProduct?searchcriteria=@&practiceFilter=%27a0c2550e-c81c-407f-b4ba-9efd4735ef20%27,%27b828e336-335e-4145-a2a5-ce1c980e88a6%27,%274b5778e3-a60a-40c2-81e7-fdb52199d052%27,%27830cfb9f-3d69-4c55-b897-9fe37f88a9f1%27,%27dd5c5a4d-4f75-470d-b9b1-772888f6b6da%27,%273569dd6c-aaf7-47a1-b70b-59ce15620002%27,%2741877c51-3a8e-4db6-b620-9dbf476d5fc5%27,%27d4fa7d6a-3d54-45ff-a523-8d344e2987db%27&from=0&pageSize=5&includeCount=1

Response:

{ success: true, data: 
{"resultCount":3,"records":[{"_isFavorite":"true","lifecycleStatus":"Proposal","keywords":"","c_referralService":"false","industries":"Manufacturing , Financial Services , Communications","_revisionTimestamp":"2016-10-09T12:49:17.952Z","name":"!@#$%^&*()_+-={}[]\":;<>/?\\|! nn Owners","description":"!@#$%^&*()_+-={}[]\":;<>/?\\|! nn Owners\n!@#$%^&*()_+-={}[]\":;<>/?\\|! nn Owners\n!@#$%^&*()_+-={}[]\":;<>/?\\|! nn Owners","_uuid":"117cd59e-c6b7-408c-a507-65c7fa1d15bc","categories":"Cloud , Enterprise Processing , Analytics","row_id":1,"c_elevatorPitch":"Owners"},{"_isFavorite":"false","lifecycleStatus":"Prototype","keywords":"","c_referralService":"false","industries":"Communications , Financial Services , Government , Other Industries","_revisionTimestamp":"2016-09-26T10:07:17.121Z","name":"1 new @","description":"test $% neww  &\n~!@#$%^&*()_+{}","_uuid":"735859b0-cbb6-4603-b3ea-2a7d4bd03ad6","categories":"Applications and Data , Internet of Things , Enterprise Processing","row_id":2,"c_elevatorPitch":"test $% neww"},{"_isFavorite":"false","lifecycleStatus":"Prototype","keywords":"","c_referralService":"false","industries":"Communications , Financial Services","_revisionTimestamp":"2016-10-05T11:29:43.734Z","name":"~!@#$%^&*()_+?><}{PO\":LKJH","description":"sdsd","_uuid":"6beea8e7-a1ea-4226-944f-fe9817f63bdd","categories":"Cloud , Analytics","row_id":3,"c_elevatorPitch":"sdsd"}],"pageSize":"5","from":"0"} }

The issue occurs when using a sequence of special characters: we get the negative count -1, as below.

Request:

http://c9t24014.itcs.hpecorp.net:8080/em/remote/execute/scripts/SearchAPIv2/searchProduct?searchcriteria=!@#$%^&*()_+-={}[]&practiceFilter=%27a0c2550e-c81c-407f-b4ba-9efd4735ef20%27,%27b828e336-335e-4145-a2a5-ce1c980e88a6%27,%274b5778e3-a60a-40c2-81e7-fdb52199d052%27,%27830cfb9f-3d69-4c55-b897-9fe37f88a9f1%27,%27dd5c5a4d-4f75-470d-b9b1-772888f6b6da%27,%273569dd6c-aaf7-47a1-b70b-59ce15620002%27,%2741877c51-3a8e-4db6-b620-9dbf476d5fc5%27,%27d4fa7d6a-3d54-45ff-a523-8d344e2987db%27&from=0&pageSize=5&includeCount=1

Response:

{ success: true, data: 
{"resultCount":-1,"records":[{"_isFavorite":"true","lifecycleStatus":"Proposal","keywords":"","c_referralService":"false","industries":"Manufacturing , Financial Services , Communications","_revisionTimestamp":"2016-10-09T12:49:17.952Z","name":"!@#$%^&*()_+-={}[]\":;<>/?\\|! nn Owners","description":"!@#$%^&*()_+-={}[]\":;<>/?\\|! nn Owners\n!@#$%^&*()_+-={}[]\":;<>/?\\|! nn Owners\n!@#$%^&*()_+-={}[]\":;<>/?\\|! nn Owners","_uuid":"117cd59e-c6b7-408c-a507-65c7fa1d15bc","categories":"Cloud , Enterprise Processing , Analytics","row_id":1,"c_elevatorPitch":"Owners"},{"_isFavorite":"false","lifecycleStatus":"Prototype","keywords":"","c_referralService":"false","industries":"Communications , Financial Services , Government , Other Industries","_revisionTimestamp":"2016-09-26T10:07:17.121Z","name":"1 new @","description":"test $% neww  &\n~!@#$%^&*()_+{}","_uuid":"735859b0-cbb6-4603-b3ea-2a7d4bd03ad6","categories":"Applications and Data , Internet of Things , Enterprise Processing","row_id":2,"c_elevatorPitch":"test $% neww"},{"_isFavorite":"false","lifecycleStatus":"Prototype","keywords":"","c_referralService":"false","industries":"Communications , Financial Services","_revisionTimestamp":"2016-10-05T11:29:43.734Z","name":"~!@#$%^&*()_+?><}{PO\":LKJH","description":"sdsd","_uuid":"6beea8e7-a1ea-4226-944f-fe9817f63bdd","categories":"Cloud , Analytics","row_id":3,"c_elevatorPitch":"sdsd"}],"pageSize":250,"from":0} }
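A plausible cause, inferred from the response rather than confirmed in EM itself: the raw # in searchcriteria=!@#$... starts the URL fragment, so everything after it (including practiceFilter, from, pageSize, and includeCount) never reaches the server. That would explain why this response echoes pageSize 250 and from 0 (apparent server defaults) instead of the "5" and "0" seen in the other responses. A short Node sketch of how a standard URL parser splits that request (the host is a placeholder and the long practiceFilter value is elided):

```javascript
// The raw '#' inside the searchcriteria value begins the URL fragment, so
// the server only ever sees the part of the query before it.
const u = new URL(
  "http://example.internal/em/remote/execute/scripts/SearchAPIv2/searchProduct" +
  "?searchcriteria=!@#$%^&*()_+-={}[]&practiceFilter=...&from=0&pageSize=5&includeCount=1");

console.log(u.searchParams.get("searchcriteria")); // "!@"  (truncated at '#')
console.log(u.search); // "?searchcriteria=!@"  (from/pageSize/includeCount are gone)
```

With the later parameters lost to the fragment, the server falls back to its own page-size and count defaults, which fits the -1 resultCount and pageSize 250 observed above.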


Error in query result when the search term is the hash sign #

I am using a query to retrieve products from EM based on search text. When the text is a special character such as !, $, or ^ it works fine, but when I use # I get an error, as below.

For example, when I use the character @ as below I get a result.

Request:

http://c9t24014.itcs.hpecorp.net:8080/em/remote/execute/scripts/SearchAPIv2/searchProduct?searchcriteria=@&practiceFilter=%27a0c2550e-c81c-407f-b4ba-9efd4735ef20%27,%27b828e336-335e-4145-a2a5-ce1c980e88a6%27,%274b5778e3-a60a-40c2-81e7-fdb52199d052%27,%27830cfb9f-3d69-4c55-b897-9fe37f88a9f1%27,%27dd5c5a4d-4f75-470d-b9b1-772888f6b6da%27,%273569dd6c-aaf7-47a1-b70b-59ce15620002%27,%2741877c51-3a8e-4db6-b620-9dbf476d5fc5%27,%27d4fa7d6a-3d54-45ff-a523-8d344e2987db%27&from=0&pageSize=5&includeCount=1

Response:

{ success: true, data: 
{"resultCount":3,"records":[{"_isFavorite":"true","lifecycleStatus":"Proposal","keywords":"","c_referralService":"false","industries":"Manufacturing , Financial Services , Communications","_revisionTimestamp":"2016-10-09T12:49:17.952Z","name":"!@#$%^&*()_+-={}[]\":;<>/?\\|! nn Owners","description":"!@#$%^&*()_+-={}[]\":;<>/?\\|! nn Owners\n!@#$%^&*()_+-={}[]\":;<>/?\\|! nn Owners\n!@#$%^&*()_+-={}[]\":;<>/?\\|! nn Owners","_uuid":"117cd59e-c6b7-408c-a507-65c7fa1d15bc","categories":"Cloud , Enterprise Processing , Analytics","row_id":1,"c_elevatorPitch":"Owners"},{"_isFavorite":"false","lifecycleStatus":"Prototype","keywords":"","c_referralService":"false","industries":"Communications , Financial Services , Government , Other Industries","_revisionTimestamp":"2016-09-26T10:07:17.121Z","name":"1 new @","description":"test $% neww  &\n~!@#$%^&*()_+{}","_uuid":"735859b0-cbb6-4603-b3ea-2a7d4bd03ad6","categories":"Applications and Data , Internet of Things , Enterprise Processing","row_id":2,"c_elevatorPitch":"test $% neww"},{"_isFavorite":"false","lifecycleStatus":"Prototype","keywords":"","c_referralService":"false","industries":"Communications , Financial Services","_revisionTimestamp":"2016-10-05T11:29:43.734Z","name":"~!@#$%^&*()_+?><}{PO\":LKJH","description":"sdsd","_uuid":"6beea8e7-a1ea-4226-944f-fe9817f63bdd","categories":"Cloud , Analytics","row_id":3,"c_elevatorPitch":"sdsd"}],"pageSize":"5","from":"0"} }

But when I use the character # as below I get an error.

Request:

http://c9t24014.itcs.hpecorp.net:8080/em/remote/execute/scripts/SearchAPIv2/searchProduct?searchcriteria=#&practiceFilter=%27a0c2550e-c81c-407f-b4ba-9efd4735ef20%27,%27b828e336-335e-4145-a2a5-ce1c980e88a6%27,%274b5778e3-a60a-40c2-81e7-fdb52199d052%27,%27830cfb9f-3d69-4c55-b897-9fe37f88a9f1%27,%27dd5c5a4d-4f75-470d-b9b1-772888f6b6da%27,%273569dd6c-aaf7-47a1-b70b-59ce15620002%27,%2741877c51-3a8e-4db6-b620-9dbf476d5fc5%27,%27d4fa7d6a-3d54-45ff-a523-8d344e2987db%27&from=0&pageSize=5&includeCount=1

Response:

"success":false,"message":"Encountered code generation error while compiling function \"f\": generated bytecode for method exceeds 64K limit. (query-result-conversion#1)"}

This issue happens for the hash sign only; other special characters work fine.
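A likely explanation: # is a reserved character that starts the URL fragment, so a raw # means the searchcriteria value arrives at the server empty, consistent with the error reported above for empty-string searches. The usual fix is to percent-encode the value before building the request URL. A sketch, with the host replaced by a placeholder:

```javascript
// '#' must be sent as %23, otherwise it is treated as the fragment marker
// and the searchcriteria value the server sees is empty.
function buildSearchUrl(base, criteria) {
  return `${base}?searchcriteria=${encodeURIComponent(criteria)}`;
}

console.log(buildSearchUrl(
  "http://example.internal/em/remote/execute/scripts/SearchAPIv2/searchProduct",
  "#")); // ...searchProduct?searchcriteria=%23
```

The same encoding step also covers the other special characters; they merely happen to survive unencoded, whereas # does not.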


Error when executing DQL query

Hello,

In our EM-based application, we are structuring a DQL query to retrieve products from EM according to some criteria. Here is the query:

select p._uuid as prodId,p.name as prodName from productArtifact p
left join businessServiceArtifact feature using p.aggregates
left join goalArtifact goal using p.associatedWithIncoming
left join driverArtifact driver using p.influencedBy
left join constraintArtifact constraint using p.influencedBy
where (lower(p.name) like :searchcriteriaWildcard OR lower(p.description) like :searchcriteriaWildcard OR lower(p.keyword.val) like :searchcriteriaWildcard
or lower(feature.name) like :searchcriteriaWildcard OR lower(feature.description) like :searchcriteriaWildcard
or lower(goal.name) like :searchcriteriaWildcard OR lower(goal.description) like :searchcriteriaWildcard
or lower(driver.name) like :searchcriteriaWildcard OR lower(driver.description) like :searchcriteriaWildcard
or lower(constraint.name) like :searchcriteriaWildcard OR lower(constraint.description) like :searchcriteriaWildcard
)

When executing it, I get the attached error (error1.png). I noticed that the error occurs when both clauses (left join driverArtifact driver using p.influencedBy) and (left join constraintArtifact constraint using p.influencedBy) are present at the same time, because when I remove one of them and keep the other the query runs successfully. I am guessing the reason is that both clauses use (using p.influencedBy). I tried to change the two clauses to:

left join driverArtifact driver on bind(p.influencedBy)
left join constraintArtifact constraint on bind(p.influencedBy)

but got the same error. When I change one of the clauses to:
left join driverArtifact driver on driver._uuid = p.influencedBy.target
I get the attached error (error2.png).

Can you please advise how to structure the query correctly while keeping both relations?

Thanks,
Hossam.
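If the parser simply cannot cope with two joins over the same p.influencedBy property, one workaround is to run the driver and constraint variants as two separate single-join queries and merge the rows in the calling script. Below is a sketch of only the merge step; the prodId/prodName fields follow the aliases in the query above, but the EM scripting call that executes each DQL query is deliberately omitted, since its exact API may differ:

```javascript
// Merge the row arrays returned by two single-join query variants,
// de-duplicating products that matched both the driver and constraint joins.
// Assumes each row carries the prodId/prodName aliases from the query above.
function mergeByProdId(resultsA, resultsB) {
  const byId = new Map();
  for (const row of [...resultsA, ...resultsB]) {
    byId.set(row.prodId, row); // last writer wins; rows share the same shape
  }
  return [...byId.values()];
}
```

Splitting the query trades one round trip for two, but keeps each DQL statement to a single join per relationship property.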

 

ArchiMate 3.0 - when included?

Hello EM team,

Is there already a rough timeline for including ArchiMate 3.0 in EM?

Thanks,
Piotr

 

Assign value to Artifact Property

Dear HPE EM Experts

I am trying to do something like this from JavaScript: appFinancialProfileArtifact.annualCostHw = 10

What is the proper way to adjust properties from JavaScript?

Thank you in advance

 

Out of memory errors

We have received the following error twice in the last two days:

 

18:32:17,278 ERROR [org.hornetq.core.client] (Thread-4764 (HornetQ-remoting-thre ads-HornetQServerImpl::serverUUID=7f9698ba-6486-11e6-8bf3-912e3c231071-300964859 -1211190591)) HQ214017: Caught unexpected Throwable: java.lang.OutOfMemoryError: GC overhead limit exceeded

 

Please advise on how to correct this issue.

The application server has 32 GB of memory.

The error occurred while attempting a domain export (a small domain of 606 objects).
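"GC overhead limit exceeded" means the JVM spent nearly all its time in garbage collection while reclaiming almost no memory, which usually indicates the heap is too small for the operation at hand. Assuming the JBoss-based layout of the EM appliance, the heap can be raised via JAVA_OPTS, for example in JBOSS_HOME/bin/standalone.conf; the path and sizes below are illustrative, not taken from EM documentation:

```shell
# Illustrative sizes for a host with 32 GB of RAM; tune to your workload.
# -Xms/-Xmx set the initial and maximum heap of the JVM running EM.
JAVA_OPTS="-Xms4g -Xmx8g $JAVA_OPTS"
```

If the error persists even with a generous heap, a heap dump (-XX:+HeapDumpOnOutOfMemoryError) would show what the domain export is actually accumulating.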

 

Bulk change artefact type

Is it possible to change a group of existing objects from one artefact type to another? For example, requirement to driver?

 

Automatic fill of Hardware Cost in Application Component Financial Profile

Dear HPE EM Experts

There is a financial profile for each application component artifact. My question concerns the best approach to filling this information automatically. For example, I have already implemented some logic in JavaScript, and now I want to create a trigger in order to publish the results of these calculations into the Hardware Costs field.

Your answers and opinion are highly appreciated.

Thank you very much in advance

 

REST API - where to find it?

There is a Java library for the REST interface out of the box. Search for “Atom REST Client” in the EM documentation; it is in the Customization Guide (the Developer Guide in older versions of EM). There is also a demo of the Java client in the EM installation directory (EM_HOME\demos\client\rest); you can build and execute it against a running EM instance with the supplied run.sh/bat.

Thanks to Pavel Zavora for providing this information.

 

Configuring the EMAPS appliance on a virtual machine

Hi Experts,

I am looking forward to evaluating HPE Enterprise Maps.

My question is: can I host the virtual appliance on a virtual machine (Windows 2012) using Oracle VirtualBox?

 

Regards,

Ashish


Best practice for backup?

Hi,

Is there any published best practice for managing and scheduling backups within Enterprise Maps, either within the UI, the CLI, or at the database level?

Thanks,
Dave

 

 

Total cost of application component

Dear Enterprise Maps Experts

Each application component has its own financial profile. This financial profile consists of several numeric values, such as “Annual Cost of Software”, “Annual Cost Labor Internal”, etc.

There is also another parameter named “Annual Cost Total”. From my understanding, this one should be the sum of all costs and should be calculated automatically. Unfortunately, it does not work this way and I have to change it manually.

Can you please tell me whether this is a bug or there is logic behind it? At the same time, can you please tell me what can be done in order to automate this calculation?

Thank you very much in advance

 

Unable to get review artifacts when logging in with a specific user

Hello,

In our project we need to add a review (c_reviewArtifact) to EM and be able to retrieve it for display. We add this artifact successfully with no problem. When retrieving this artifact with an HTTP GET to this URL

http://c9t21567.itcs.hpecorp.net:8080/em/platform/restSecure/artifact/c_reviewArtifact/<reviewId>?alt=application%2Fatom%2Bxml

where <reviewId> is the ID of the review to be retrieved.

 

When retrieving it with my user (Private Info Erased), which has the solution owner role, the review data is returned successfully. But when trying to retrieve it with another user (Private Info Erased), a normal user without solution owner or admin roles, we get this error:

 

"Principal Private Info Erased has no permission to get artifact of uuid bea98634-0f9f-4a3c-804f-db3f53f7d913. Security for non-governed artifacts is defined by domain default access rights. For governed artifacts, security is defined by lifecycle process. User can lose access rights to a governed artifact when it moves to the next lifecycle stage."

The complete stack trace for the error is attached.

So why do we get this error? Is there some missing configuration in EM for this normal user, or for the review artifact itself?

 

Thanks,

Hossam.

 

hpe-em service gets stopped after integrating with CSA

Deploy the EM (Enterprise Maps) OVA file, change the IP address to static, and provide a new hostname/FQDN. Do not use the default hostname/FQDN that came with the OVA deployment. Replace the old FQDN with the new FQDN in the required places.

Now integrate EM with CSA using csa.download, csa.install, and sso.configure.

Reboot the machine. It will take at least 30 minutes to come up. Starting EM takes time, and it throws "Unsuccessfully started EM".

The EM service is stopped and we cannot launch the EM/CSA home page.

The screenshot is attached for reference. I have the server log as well, but it is too large to attach.


HQ224037: cluster connection Failed to handle message

Hello,

I have locally created a HornetQ cluster containing two nodes. Both HornetQ servers start correctly and I am able to send and receive messages from them. The problem happens when I stop the first one: the cluster shuts down and I am no longer able to send or receive messages. Both servers are live.

I get the following error:

ERROR [org.hornetq.core.server] HQ224037: cluster connection Failed to handle message: java.lang.IllegalStateException: Cannot find binding for jms.queue.testQueue1d474f4d-0516-11e5-a3c8-5578484ee9df
at org.hornetq.core.server.cluster.impl.ClusterConnectionImpl$MessageFlowRecordImpl.doConsumerClosed(ClusterConnectionImpl.java:1570) [hornetq-server.jar:]
at org.hornetq.core.server.cluster.impl.ClusterConnectionImpl$MessageFlowRecordImpl.onMessage(ClusterConnectionImpl.java:1288) [hornetq-server.jar:]
at org.hornetq.core.client.impl.ClientConsumerImpl.callOnMessage(ClientConsumerImpl.java:1085) [hornetq-core-client.jar:]
at org.hornetq.core.client.impl.ClientConsumerImpl.access$400(ClientConsumerImpl.java:57) [hornetq-core-client.jar:]
at org.hornetq.core.client.impl.ClientConsumerImpl$Runner.run(ClientConsumerImpl.java:1220) [hornetq-core-client.jar:]
at org.hornetq.utils.OrderedExecutorFactory$OrderedExecutor$1.run(OrderedExecutorFactory.java:106) [hornetq-core-client.jar:]
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) [rt.jar:1.8.0_101]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) [rt.jar:1.8.0_101]
at java.lang.Thread.run(Unknown Source) [rt.jar:1.8.0_101]

Here is my configuration for the first node:

<configuration xmlns="urn:hornetq"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:hornetq /schema/hornetq-configuration.xsd">

<paging-directory>${data.dir:../data}/paging</paging-directory>

<bindings-directory>${data.dir:../data}/bindings</bindings-directory>

<journal-directory>${data.dir:../data}/journal</journal-directory>

<journal-min-files>10</journal-min-files>

<shared-store>true</shared-store>
<backup>false</backup>
<check-for-live-server>true</check-for-live-server>
<allow-failback>true</allow-failback>
<cluster-user>clusteruser</cluster-user>
<cluster-password>cluster123</cluster-password>
<clustered>true</clustered>

<large-messages-directory>${data.dir:../data}/large-messages</large-messages-directory>

<connectors>
<connector name="netty">
<factory-class>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</factory-class>
<param key="host" value="${hornetq.remoting.netty.host:localhost}"/>
<param key="port" value="${hornetq.remoting.netty.port:5445}"/>
</connector>

<connector name="netty-throughput">
<factory-class>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</factory-class>
<param key="host" value="${hornetq.remoting.netty.host:localhost}"/>
<param key="port" value="${hornetq.remoting.netty.batch.port:5455}"/>
<param key="batch-delay" value="50"/>
</connector>
</connectors>

<acceptors>
<acceptor name="netty">
<factory-class>org.hornetq.core.remoting.impl.netty.NettyAcceptorFactory</factory-class>
<param key="host" value="${hornetq.remoting.netty.host:localhost}"/>
<param key="port" value="${hornetq.remoting.netty.port:5445}"/>
</acceptor>

<acceptor name="netty-throughput">
<factory-class>org.hornetq.core.remoting.impl.netty.NettyAcceptorFactory</factory-class>
<param key="host" value="${hornetq.remoting.netty.host:localhost}"/>
<param key="port" value="${hornetq.remoting.netty.batch.port:5455}"/>
<param key="batch-delay" value="50"/>
<param key="direct-deliver" value="false"/>
</acceptor>
</acceptors>

<broadcast-groups>
<broadcast-group name="bg-group1">
<group-address>231.7.7.7</group-address>
<group-port>9876</group-port>
<broadcast-period>5000</broadcast-period>
<connector-ref>netty</connector-ref>
</broadcast-group>
</broadcast-groups>

<discovery-groups>
<discovery-group name="dg-group1">
<group-address>231.7.7.7</group-address>
<group-port>9876</group-port>
<refresh-timeout>10000</refresh-timeout>
</discovery-group>
</discovery-groups>

<cluster-connections>
<cluster-connection name="my-cluster">
<address>jms</address>
<connector-ref>netty</connector-ref>
<discovery-group-ref discovery-group-name="dg-group1"/>
</cluster-connection>
</cluster-connections>

<security-settings>
<security-setting match="#">
<permission type="createNonDurableQueue" roles="guest"/>
<permission type="deleteNonDurableQueue" roles="guest"/>
<permission type="consume" roles="guest"/>
<permission type="send" roles="guest"/>
</security-setting>
</security-settings>

<address-settings>
<!--default for catch all-->
<address-setting match="#">
<dead-letter-address>jms.queue.DLQ</dead-letter-address>
<expiry-address>jms.queue.ExpiryQueue</expiry-address>
<redelivery-delay>0</redelivery-delay>
<max-size-bytes>10485760</max-size-bytes>
<message-counter-history-day-limit>10</message-counter-history-day-limit>
<address-full-policy>BLOCK</address-full-policy>
</address-setting>
</address-settings>

</configuration>

And hornetq-jms for the first node:

<configuration xmlns="urn:hornetq"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:hornetq /schema/hornetq-jms.xsd">

<connection-factory name="NettyConnectionFactory">
<xa>true</xa>
<ha>true</ha>
<connectors>
<connector-ref connector-name="netty"/>
</connectors>
<discovery-group-ref discovery-group-name="dg-group1"/>
<entries>
<entry name="/ConnectionFactory"/>
</entries>

<!-- Pause 1 second between connect attempts -->
<retry-interval>100</retry-interval>

<!-- Multiply subsequent reconnect pauses by this multiplier. This can be used to
implement an exponential back-off. For our purposes we just set to 1.0 so each reconnect
pause is the same length -->
<retry-interval-multiplier>1.0</retry-interval-multiplier>
<connection-ttl>-1</connection-ttl>
<failover-on-initial-connection>true</failover-on-initial-connection>
<failover-on-server-shutdown>true</failover-on-server-shutdown>
<client-failure-check-period>60</client-failure-check-period>

<!-- Try reconnecting number of times (-1 means "unlimited") -->
<reconnect-attempts>50</reconnect-attempts>
<confirmation-window-size>10</confirmation-window-size>
</connection-factory>

<queue name="DLQ">
<entry name="/queue/DLQ"/>
</queue>

<queue name="ExpiryQueue">
<entry name="/queue/ExpiryQueue"/>
</queue>

<queue name="testQueue">
<entry name="/queue/testQueue"/>
</queue>

</configuration>

Here is the configuration for the second node:

 

<configuration xmlns="urn:hornetq"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:hornetq /schema/hornetq-configuration.xsd">

<paging-directory>${data.dir:../data}/paging</paging-directory>

<bindings-directory>${data.dir:../data}/bindings</bindings-directory>

<journal-directory>${data.dir:../data}/journal</journal-directory>

<journal-min-files>10</journal-min-files>

<backup>false</backup>
<shared-store>true</shared-store>
<failover-on-shutdown>true</failover-on-shutdown>
<allow-failback>true</allow-failback>
<check-for-live-server>true</check-for-live-server>
<cluster-user>clusteruser</cluster-user>
<cluster-password>cluster123</cluster-password>
<clustered>true</clustered>

<large-messages-directory>${data.dir:../data}/large-messages</large-messages-directory>

<connectors>
<connector name="netty">
<factory-class>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</factory-class>
<param key="host" value="${hornetq.remoting.netty.host:localhost}"/>
<param key="port" value="${hornetq.remoting.netty.port:5446}"/>
</connector>

<connector name="netty-throughput">
<factory-class>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</factory-class>
<param key="host" value="${hornetq.remoting.netty.host:localhost}"/>
<param key="port" value="${hornetq.remoting.netty.batch.port:5456}"/>
<param key="batch-delay" value="50"/>
</connector>
</connectors>

<acceptors>
<acceptor name="netty">
<factory-class>org.hornetq.core.remoting.impl.netty.NettyAcceptorFactory</factory-class>
<param key="host" value="${hornetq.remoting.netty.host:localhost}"/>
<param key="port" value="${hornetq.remoting.netty.port:5446}"/>
</acceptor>

<acceptor name="netty-throughput">
<factory-class>org.hornetq.core.remoting.impl.netty.NettyAcceptorFactory</factory-class>
<param key="host" value="${hornetq.remoting.netty.host:localhost}"/>
<param key="port" value="${hornetq.remoting.netty.batch.port:5456}"/>
<param key="batch-delay" value="50"/>
<param key="direct-deliver" value="false"/>
</acceptor>
</acceptors>

<broadcast-groups>
<broadcast-group name="bg-group1">
<group-address>231.7.7.7</group-address>
<group-port>9876</group-port>
<broadcast-period>5000</broadcast-period>
<connector-ref>netty</connector-ref>
</broadcast-group>
</broadcast-groups>

<discovery-groups>
<discovery-group name="dg-group1">
<group-address>231.7.7.7</group-address>
<group-port>9876</group-port>
<refresh-timeout>10000</refresh-timeout>
</discovery-group>
</discovery-groups>

<cluster-connections>
<cluster-connection name="my-cluster">
<address>jms</address>
<connector-ref>netty</connector-ref>
<discovery-group-ref discovery-group-name="dg-group1"/>
</cluster-connection>
</cluster-connections>

<security-settings>
<security-setting match="#">
<permission type="createNonDurableQueue" roles="guest"/>
<permission type="deleteNonDurableQueue" roles="guest"/>
<permission type="consume" roles="guest"/>
<permission type="send" roles="guest"/>
</security-setting>
</security-settings>

<address-settings>
<!--default for catch all-->
<address-setting match="#">
<dead-letter-address>jms.queue.DLQ</dead-letter-address>
<expiry-address>jms.queue.ExpiryQueue</expiry-address>
<redelivery-delay>0</redelivery-delay>
<max-size-bytes>10485760</max-size-bytes>
<message-counter-history-day-limit>10</message-counter-history-day-limit>
<address-full-policy>BLOCK</address-full-policy>
</address-setting>
</address-settings>

</configuration>

Hornetq-jms for the second node:

 

<configuration xmlns="urn:hornetq"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:hornetq /schema/hornetq-jms.xsd">

<connection-factory name="NettyConnectionFactory">
<xa>true</xa>
<ha>true</ha>
<connectors>
<connector-ref connector-name="netty"/>
</connectors>
<discovery-group-ref discovery-group-name="dg-group1"/>
<entries>
<entry name="/ConnectionFactory"/>
</entries>

<!-- Pause 1 second between connect attempts -->
<retry-interval>100</retry-interval>

<!-- Multiply subsequent reconnect pauses by this multiplier. This can be used to
implement an exponential back-off. For our purposes we just set to 1.0 so each reconnect
pause is the same length -->
<retry-interval-multiplier>1.0</retry-interval-multiplier>
<connection-ttl>-1</connection-ttl>
<failover-on-initial-connection>true</failover-on-initial-connection>
<failover-on-server-shutdown>true</failover-on-server-shutdown>
<client-failure-check-period>60</client-failure-check-period>

<!-- Try reconnecting number of times (-1 means "unlimited") -->
<reconnect-attempts>50</reconnect-attempts>
<confirmation-window-size>10</confirmation-window-size>
</connection-factory>

<queue name="DLQ">
<entry name="/queue/DLQ"/>
</queue>

<queue name="ExpiryQueue">
<entry name="/queue/ExpiryQueue"/>
</queue>

<queue name="testQueue">
<entry name="/queue/testQueue"/>
</queue>

</configuration>

Please help me find the answer. If you need any more information, I'll provide it.
Thanks in advance.
Arrick

 

SPARXEA continually pauses or hangs

Hi, I have a SPARXEA installation (12.1 Build 1229) which is connected to an EM repository; the model in SPARXEA has been baselined in EM. The EM installation was running in Azure and has now been discontinued (it is no longer active), although my EM connection configs in SPARXEA still point to the EM repository. I have noticed that SPARXEA hangs as I try to navigate the model from package to package. I have removed the 'default' EM repository config setting, so theoretically SPARXEA should have no idea a repository is connected. My problem is that the pauses and hangs are making the tool almost unusable. Do I have to remove all EM repository connections? Do I have to remove any 'HP EM Shared Repository' classes added to the model when I baselined it? Is the pause/delay due to the quiescent EM instance, or something else? Any ideas?

 

Tuning JVM on Demo appliance

Hello all,

Looking for advice on optimizing our 64-bit JVM heap size on Debian, as in our demo appliance.

I know this is a complex topic with many variables, including physical RAM, OS, page size in use, etc., so I am looking for developer feedback on what has been found to work best in practice with the EM app.

The instance in question is a modified demo appliance (Debian/JBoss) running in a VMware VM with 8 CPUs and 32 GB RAM.

 

Contact Us

Vivit Worldwide
P.O. Box 18510
Boulder, CO 80308

Email: info@vivit-worldwide.org

Mission

Vivit's mission is to serve the Hewlett Packard Enterprise User Community through Advocacy, Community, and Education.