|HPE Software Products: Quality Management Best Practices and Methodology|
The standard reporting Dashboard is not meeting our needs, and we are looking to expand our reports with KPIs/SLAs on numerous levels, based on the raw test case and defect data in ALM. What are the 'industry standards' when it comes to this with HP ALM? KPIs in scope are TAT (turnaround time), aging, first-time-pass rate, reopen rate, etc.
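As a sketch of what those KPIs mean in terms of raw data, the snippet below computes turnaround time, reopen rate, and first-time-pass rate from exported records. The field names (`opened`, `reopen_count`, `attempt`, etc.) are illustrative stand-ins, not actual ALM schema columns; in practice you would map them from the BUG and RUN data you export.

```python
from datetime import date

# Hypothetical records exported from ALM raw data (field names are
# illustrative, not actual ALM schema columns).
defects = [
    {"id": 1, "opened": date(2016, 1, 4), "closed": date(2016, 1, 8), "reopen_count": 0},
    {"id": 2, "opened": date(2016, 1, 5), "closed": date(2016, 1, 20), "reopen_count": 2},
    {"id": 3, "opened": date(2016, 1, 10), "closed": None, "reopen_count": 0},
]
runs = [
    {"test_id": "T1", "attempt": 1, "status": "Passed"},
    {"test_id": "T2", "attempt": 1, "status": "Failed"},
    {"test_id": "T2", "attempt": 2, "status": "Passed"},
]

def turnaround_days(defects):
    """Average open-to-close time (TAT) over closed defects."""
    closed = [d for d in defects if d["closed"]]
    return sum((d["closed"] - d["opened"]).days for d in closed) / len(closed)

def reopen_rate(defects):
    """Fraction of defects reopened at least once."""
    return sum(1 for d in defects if d["reopen_count"] > 0) / len(defects)

def first_time_pass_rate(runs):
    """Fraction of tests whose first run passed."""
    first = {r["test_id"]: r for r in runs if r["attempt"] == 1}
    return sum(1 for r in first.values() if r["status"] == "Passed") / len(first)
```

The same logic can be expressed as Excel-report SQL once the KPI definitions are agreed; the hard part is usually pinning down the definitions (e.g. does aging count only open defects?), not the computation.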
The HP ALM test case upload using the Excel add-in is not working on my laptop. Even after the add-in was installed from the HP ALM URL, we were not able to locate the .exe file.
I am currently working on a large project, and have been asked to look into best practice (in ALM) for retesting.
Here is my scenario:
Running a test instance (manually), a step fails. A defect is opened for that step.
After the defect is fixed, it is set to Retest. (So far, so good.)
The question is: when they go back to run the test, they want to run the same (failing) instance.
I suggested they should create a new instance to run, so the failing instance can be traced back to the defect. So I am asking for advice on what the best practice is for this scenario.
Thanks in advance for your help.
Test case design in HP ALM QC in an Agile methodology?
Does anyone have test design samples showing how they write their test cases in Agile for a specific feature set that is being developed across multiple iterations?
I want to get the defects from ALM using the REST API with PHP. I am using the GET method with a header containing the Base64-encoded username:password.
The code below works in Postman, which is a REST client, but not in PHP. Can someone help me resolve the issue? When I execute the code I get no response in the Chrome browser, but the same request works in the Postman client.
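One common cause of the Postman-works-but-code-doesn't symptom is that ALM's authentication point sets an LWSSO session cookie which must be carried on every subsequent request; Postman keeps cookies automatically, while a bare HTTP call in code does not. The sketch below (Python rather than PHP, with a placeholder host, domain, and project) builds the Basic auth header and shows the two-step flow: authenticate first, then fetch defects with the session's cookies. The same cookie-jar idea applies to PHP's curl (`CURLOPT_COOKIEFILE`/`CURLOPT_COOKIEJAR`).

```python
import base64

def basic_auth_header(username, password):
    """Build the Basic auth header ALM's authentication point expects."""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

# Placeholder server/domain/project names - substitute your own.
BASE = "http://almserver:8080/qcbin"
AUTH_URL = f"{BASE}/authentication-point/authenticate"
DEFECTS_URL = f"{BASE}/rest/domains/MYDOMAIN/projects/MYPROJECT/defects"

headers = basic_auth_header("alice", "s3cret")

# With the `requests` library the flow would be (commented out so this
# sketch stays runnable offline):
# import requests
# session = requests.Session()                 # keeps the LWSSO cookie
# session.get(AUTH_URL, headers=headers)       # step 1: authenticate
# r = session.get(DEFECTS_URL, headers={"Accept": "application/json"})
# print(r.text)                                # step 2: fetch defects
```

If the browser shows an empty response, also check whether the request is being blocked by same-origin policy; calling the API from server-side PHP with a cookie jar avoids that.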
Hello there, I am new here. I have some questions; I hope you can help.
It would be better to link or add a reference to a management tool for test users, so the data will be centralized.
ALM 11.52 Patch 6 [Eng] was installed on a Japanese OS machine. We are facing some issues with regard to character sets. Is there an ALM module that can help parse the characters, or any other alternative?
Does anyone know when Patch 2 for ALM 12.21 will be released? I understand that with this patch Windows 10 will be supported.
Are there any common techniques or methods for determining what information can be archived in a Quality Center/ALM project?
We have over 1 million test cases, over 2 million test sets, and over 2 million test executions in our largest ALM project. It's been rather sluggish lately, and we'd like to clean out items that haven't been touched in a while. Any surefire techniques for easily identifying and purging everything old?
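One practical starting point is a "last touched" rule: flag anything neither modified nor executed since a cutoff date. The sketch below applies that rule to an exported list of tests; the field names are hypothetical, not actual ALM schema columns, and in practice you would derive the dates from the test's modification timestamp and its most recent run.

```python
from datetime import date

# Illustrative export of test metadata (field names are hypothetical,
# not the actual ALM schema).
tests = [
    {"id": "T1", "last_modified": date(2012, 3, 1), "last_run": date(2012, 5, 1)},
    {"id": "T2", "last_modified": date(2015, 11, 2), "last_run": None},
    {"id": "T3", "last_modified": date(2016, 1, 15), "last_run": date(2016, 1, 20)},
]

def purge_candidates(items, cutoff):
    """IDs of items neither modified nor run since the cutoff date."""
    def last_touched(item):
        dates = [d for d in (item["last_modified"], item["last_run"]) if d]
        return max(dates)
    return [i["id"] for i in items if last_touched(i) < cutoff]
```

Whatever the rule, exporting the candidate list for owner review before deleting (or moving items to an archive project) is safer than purging in place, since ALM deletions are not easily reversible.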
GENERAL PROBLEM: user group permissions don't "work" as I hoped.
CUSTOM CONTEXT: I wish to manage the ALM project authorizations of users by accumulating basic rights. That is, I would separate the permissions into "elementary" groups, and a user would belong to a combination of elementary groups.
PERMISSION CONTEXT: to administer some user permissions, I'd like to create a group such as "PMOs" which can only create/modify/delete release folders, releases, cycles, milestones, and scope items.
ENTITY CONTEXT OF THE PMO GROUP: Management / Releases / Project Planning and Tracking (PPT).
Current customization of the "Release sheet" for the PMO group: all the check boxes are selected.
PROBLEM: when I use ALM with a user ID that has only the "PMO" role, I can create/delete/modify all the items (folder, release, cycle, milestone, scope item, ...), but in the "Release scope item" form all fields are gray and empty and cannot be changed.
QUESTIONS (I need your help):
Is there another check box in customization I forgot to select somewhere (in another sheet)?
Is it necessary to add another particular permission?
Should I use the Script Editor to force the fields to be editable for this group?
INFORMATION: if I use a user ID in the standard HP group "Project Manager", the problem does not appear.
Create 5 requirements and 20 test cases – each requirement covered by 4 tests
Define STest, SIT and UAT cycles in release management.
For SystemTest – execute all the 20 Tests
For SystemIntegrationTest – Execute only 15 Tests
For UAT – Execute only 5 Tests
Challenge: generate a report after each test phase showing that all the requirements have passed.
What is the best way of implementing the above scenario in ALM?
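The reporting logic behind the scenario above can be sketched as follows: from the requirement-to-test coverage map and the runs executed in a given phase, derive a per-requirement status. The data below is a made-up miniature of the 5-requirements/20-tests setup, and the status rule (pass only if every executed covering test passed, "Not Covered" if none ran in that phase) is one reasonable convention, not ALM's built-in coverage calculation.

```python
# Hypothetical coverage map and per-phase run results (names illustrative).
coverage = {
    "REQ-1": ["T1", "T2", "T3", "T4"],
    "REQ-2": ["T5", "T6", "T7", "T8"],
}
runs = {
    "SystemTest": {"T1": "Passed", "T2": "Passed", "T3": "Passed",
                   "T4": "Passed", "T5": "Passed", "T6": "Failed",
                   "T7": "Passed", "T8": "Passed"},
}

def requirement_status(coverage, phase_runs):
    """Per-requirement phase status: Passed only if every executed covering
    test passed; Not Covered if no covering test ran in this phase."""
    report = {}
    for req, tests in coverage.items():
        statuses = [phase_runs[t] for t in tests if t in phase_runs]
        if not statuses:
            report[req] = "Not Covered"
        elif all(s == "Passed" for s in statuses):
            report[req] = "Passed"
        else:
            report[req] = "Failed"
    return report
```

In ALM itself, linking each test instance to a cycle of the release and using requirement coverage analysis filtered by cycle gives the equivalent per-phase view; the sketch just makes the aggregation rule explicit.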
I have developed an Excel utility which can export ALM project customization. My purpose is to:
Please see the attached zip file for the Excel utility and the PDF for screenshots. I want to share this with you and get some feedback to help me improve this utility. Thank you.
I'm working with HP ALM 12.20 Quality Center.
I would like to import test cases from Excel into the correct Test Plan folder in HP QC.
The regular import works fine, but not into the correct Test Plan folder.
My folder hierarchy looks as follows:
How can I import my test cases into the sub-sub-sub-folder?
Thanks for any help
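One approach that generally works with the ALM Excel add-in is to map a spreadsheet column to the Subject field and put the full backslash-separated folder path in it, so each row lands in (and, if needed, creates) the target folder. The layout below is a hedged illustration with made-up folder names; whether the leading `Subject\` root must be included can vary by add-in version, so it is worth testing both forms against a scratch project.

```
TestName      Subject                                   Description
Checkout_OK   Subject\Release12\Web\Checkout\Payment    Verify payment step
Login_OK      Subject\Release12\Web\Login               Verify login
```

In the add-in's field-mapping step, the column holding the path is mapped to Subject like any other field.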
I've read over the Project Topology Best Practices guide, and I understand the motives behind the recommendations, but I still feel I'm stuck at a cross-road.
Our organization has a large n-tier suite of web applications that deploys multiple releases a week (major, minor & maintenance). There are four client facing groups of applications that all integrate into a fifth common framework. A release often involves one of the groups plus the common group, but can involve more. In all, there are about 100 applications.
We've discussed and prototyped several options. Our biggest hurdles are:
Prototype 1: Global
One project for the organization. I liked this option to begin with because it allows the most flexibility with respect to reporting across releases and collecting regression tests across applications, but then we read it is not a good idea because of the lack of focus and the inability to scale. However, after learning about the problems of the other prototypes, I've revisited this idea and thought about scaling it up by copying the project periodically (every year) and leaving the test runs and history behind in the previous year's project.
Prototype 2: Application
One project per application. This seems to be the recommended approach as the project focus is clear and it will likely scale well, however in our prototyping we've found it difficult to report on the release across all the projects involved. We also found we'd need a requirements intake project where it is determined which applications are involved. Then we'd need to create libraries of requirements to move them to the corresponding application projects and we found libraries to be finicky. That is, a library filter has to be exactly right across all projects or it won't contain what you think it does, specifically the release path and name.
Prototype 3: Release
One project per release. I like that a release project is easy to report on its progress. However, I found that to collect a suite of regression tests per application for the organization would require moving the tests to a common regression test project. That would require the use of libraries again. And then including the regression tests back into a release would require another library to import back into subsequent releases.
If anyone has experience with a similar situation, I'd really appreciate some feedback. We have the chance to "get it right" as we're redeploying ALM again soon. Thanks!
Does anybody have some examples of workflow design/code showing how to handle approval and gating processes with ALM?
We can use the query below to do this.
These kinds of queries are useful for auditing and reporting purposes.
(All table permissions are stored in the "TABLES" table, and the highlighted values can be substituted with the relevant info, e.g. TB_GRANT_MODIFY or TB_GRANT_DELETE for other tables.)
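The query the post refers to did not survive in this text, but based on the identifiers it names (the TABLES system table and the TB_GRANT_MODIFY / TB_GRANT_DELETE columns), a query of this kind would look roughly like the following hedged reconstruction; verify the exact column set against your own project schema before relying on it.

```sql
-- Hedged reconstruction: lists per-table modify permissions from the
-- ALM project schema's TABLES system table. Substitute the table name
-- and swap TB_GRANT_MODIFY for TB_GRANT_DELETE as needed.
SELECT TB_NAME, TB_GRANT_MODIFY
FROM TABLES
WHERE TB_NAME = 'BUG'
```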
I would like to know the reason HP ALM provides the OTA / REST APIs. Why does the HP ALM product offer an API? Why do users need customization on top of HP ALM?
Are there any limitations on using the HP ALM product directly? If we do customization, what benefits do we get?
Please help me with this.
My project supports multiple versions for the clients (e.g. 10.x, 11.x, 12.x) and all are active versions. Once an issue is reported in one version (say 10.x), it is fixed through a task (Task A) within that version, and the fix is then synced to other future/past versions (Task B, Task C). As a QA, when maintaining the test case for such a scenario, what is the best practice I can follow? The approaches I can think of are:
Approach 1 - Have duplicate test cases in the Test Plan for each version, so that we are able to track the effort that happened in each release.
Approach 2 - Have just one test case, corresponding to the version where the issue was found. To execute that test case after a fix is made in a different release, pull the same test case from the other version (10.x) and execute it in the Test Lab under the 11.x version.
I am less inclined towards Approach 2, as it hampers test reporting, coverage, etc.
Please share your valuable inputs on the best practice for handling such situations.
I have created a graph which shows the trend of test cases that were recently updated, based on a history-enabled field.
However, this graph is showing only one week of data, and I need one month.
The description of the graph says:
"The Test Planning - Trend Graph shows the history of changes to specific Test Plan fields in a project, at each point during a period of time. You specify the field for which you want to view the number of changes, and the time period for which you want to view data."
How do I change the date range?
How do I identify which parent test cases call a given child test case? For example, I have a list of 10 test cases which I know are being called by some other parent test cases. How do I find those parents?
The Dependencies tab for those child (called) test cases is also empty. I need help with a query in the Analysis View of the Dashboard, so that I can extract the test IDs.
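As a starting point for such an Analysis View query: in the QC/ALM project schema, a "Call <Test>" design step records the called test's ID in the DESSTEPS table, so joining on that link column yields the parents. The sketch below is hedged; the column names (DS_TEST_ID for the owning test, DS_LINK_TEST for the called test) should be verified against your project's schema, and the IDs in the IN list are placeholders for your 10 child test IDs.

```sql
-- Hedged sketch: find parent tests whose design steps call the given
-- child tests. Replace the placeholder IDs with your child test IDs.
SELECT DS_TEST_ID AS PARENT_TEST_ID,
       DS_LINK_TEST AS CALLED_TEST_ID
FROM DESSTEPS
WHERE DS_LINK_TEST IN (1001, 1002, 1003)
```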