
Case Intake Process


The purpose of the Case Intake Module is to efficiently gather all required and relevant information from the employee so that his or her claim event can be processed.

The case intake process involves the following steps:

Step 1: Basic Information – Request Details

Obtain the following request details from the caller:

Source of Request – The remaining fields are populated based on the source of the request. If the source is anyone other than the employee, the system asks for the source’s contact details (see the sketch after this list).

Reason for Request – Continuous / Intermittent

Last Day Worked

First Date Absent from Work

Expected or Actual Full Duty Return to Work – This field is displayed only when the leave type is intermittent.
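As a rough sketch of this conditional display logic (the field names come from the list above, but the function name and values are illustrative assumptions, not the actual intake screen definition):

    # Hypothetical Step 1 logic: which request-detail fields are shown.
    def request_detail_fields(source_of_request, leave_type):
        fields = [
            "Source of Request",
            "Reason for Request (Continuous / Intermittent)",
            "Last Day Worked",
            "First Date Absent from Work",
        ]
        if source_of_request != "employee":
            # A non-employee source triggers a prompt for contact details.
            fields.append("Source Contact Details")
        if leave_type == "intermittent":
            # Shown only for intermittent leave.
            fields.append("Expected or Actual Full Duty Return to Work")
        return fields

    print(request_detail_fields("physician", "intermittent"))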

Step 2: Employee Information

Verify and validate employee indicative data that may be relevant to his or her leave of absence event.

Confirm the employee’s demographic, eligibility, and employment information.

Make any changes advised by the caller in the right-hand column of this step.

Step 3: Leave Details Entry

The Leave Details page displays different fields depending on the request (event) selected on the Intake Launch page.

Each leave reason has its own specific set of questions, as sketched below.
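For illustration only (the leave reasons and questions below are invented, not the actual configuration), the reason-to-questions relationship can be pictured as a simple lookup:

    # Hypothetical mapping of leave reason to its question set.
    QUESTIONS_BY_REASON = {
        "Own serious health condition": [
            "What is the expected duration of the absence?",
            "Has a physician been consulted?",
        ],
        "Care for a family member": [
            "What is your relationship to the family member?",
            "Is the care continuous or intermittent?",
        ],
    }

    def questions_for(reason):
        # Unknown reasons fall back to an empty question set in this sketch.
        return QUESTIONS_BY_REASON.get(reason, [])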

Step 4: Confirmation Page

This page reconfirms the entered information with the employee for validation.

It is a read-only view of all the editable pages from the previous steps of the case intake process; the user can still go back and correct any information that was entered incorrectly.

Step 5: Receipt Page

This page is the last page in the claim intake process. It displays the key information related to the claim just submitted by the intake specialist, along with the set of script(s) that the intake manager needs to read to the user.

This page shows the claims assigned to the user and also displays the unique case number that was created.

Two key actions lead the user to the receipt page: confirming the intake claim, and exiting the intake without completing it (Cancel Intake Case); see the sketch below.
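A minimal sketch of those two paths (the action and page names are assumptions, not the actual system’s identifiers):

    # Both confirming the intake and cancelling it lead to the receipt page.
    def next_page(action):
        if action in ("confirm_intake", "cancel_intake"):
            return "receipt_page"
        raise ValueError("unknown action: " + action)

    assert next_page("confirm_intake") == "receipt_page"
    assert next_page("cancel_intake") == "receipt_page"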

Integration Test


The integration test is a series of activities executed in the order expected in the production environment. Where a unit test examines a specific unit (activity, PCS, etc.) of the system, the integration test looks at an entire module (or event flow). All channels (GUI, YBR, IVR, PCS, Batch) should be incorporated into this test. Each of the individual channels and units has previously been tested separately. The integration test evaluates all of the functionality together, including how the channels and units interface with each other.

Regression Test

Regression testing applies to both ongoing and implementation teams. It is the process of ensuring that items that previously functioned properly continue to function correctly after each provision migration. Regression testing is performed to ensure that the system still functions properly after it has been modified for an iteration of development or for a system correction. Once an iteration has been completed or the system is in production, the regression test becomes a baseline; it verifies that anything that was unchanged still performs correctly and that the changes work within the system’s parameters. A rough sketch of the baseline idea follows.
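A minimal sketch in Python, assuming a trivial stand-in function and invented test values (none of this comes from the actual system):

    # Re-run saved test conditions against a baseline of previously
    # verified results; any mismatch is a regression.
    def eligible(hours, months):  # stand-in for the unit under test
        return months >= 12 and hours >= 1250

    BASELINE = {(1300, 24): True, (1000, 24): False, (1300, 6): False}

    failures = [(inputs, expected, eligible(*inputs))
                for inputs, expected in BASELINE.items()
                if eligible(*inputs) != expected]
    print("regression failures:", failures)  # empty list = nothing regressed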

Acceptance Test

The Acceptance test is the process of submitting test results to the system’s end-user for approval or sign-off. The acceptance test cases are generated from the system use cases agreed to at the beginning of the project with the client. The test results can be shared with the client, ODG associates, or customer service associates. The results can be in the form of sample panels, reports, statements, checks, or files.

Testing Activities

There are five main activities in the Testing Discipline, and they apply to each phase of testing: unit, integration, regression, and acceptance. There are specific modules for each process at the unit test level, as well as modules for integration, regression, and acceptance testing. Each testing activity applies to both implementation and ongoing teams.

Fig. 4.9 Testing Activities


  • Plan Test

  • Design Test

  • Implement & Execute Test

Unit Test


The unit test is the process of executing a set of test conditions on a unit of the system to verify that it performs its assigned function. The unit test begins after a unit of the system is tailored to fit the client team’s needs. It is completed when the functionality of the activity, PCS, YBR flow, etc., has no defects.

Note: This unit test is a combination of what was formerly known as unit testing and function testing.

Each component of the system (activity, PCS, YBR flow, etc.) is a unit of the system. Many associates are accountable for the quality of the configuration and unit test. The unit test is typically planned and designed by the Systems Analyst, while the Lead Systems Analyst and Benefits Operations Manager review the test plan. A few of the unit test conditions from the unit test plan should be executed informally after configuration and prior to handing the unit over for test execution. These informal test conditions should represent a sample of the unit’s functionality, to confirm that the configuration was successful enough to continue further testing. The Setup Configuration Analyst assigned to the unit test can then execute and evaluate the entire test plan.
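As a concrete illustration of executing a set of test conditions against a unit, the sketch below uses Python’s unittest with an invented eligibility rule; it is hypothetical, not the actual TBA logic:

    import unittest

    # Hypothetical unit under test: a simplified leave-eligibility check.
    def is_eligible_for_leave(hours_worked_last_year, months_of_service):
        return months_of_service >= 12 and hours_worked_last_year >= 1250

    class EligibilityUnitTest(unittest.TestCase):
        def test_meets_both_thresholds(self):
            self.assertTrue(is_eligible_for_leave(1300, 24))

        def test_insufficient_hours(self):
            self.assertFalse(is_eligible_for_leave(1000, 24))

        def test_insufficient_service(self):
            self.assertFalse(is_eligible_for_leave(1300, 6))

    if __name__ == "__main__":
        unittest.main()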

Testing Phases

From a big-picture perspective, whenever possible, testing should start with actual client data as the input for the test. The design test spreadsheets (DTS) should be used as a starting point for test conditions. The test conditions should be tailored based on the gap analysis performed during system analysis. From the gap analysis, the Lead Systems Analyst and Systems Analyst should identify gaps between the base system and the client’s needs and add the corresponding test conditions to the design test spreadsheet. The Test Plan Query Log Guidelines and Template can be used to log queries and responses during the test planning and test execution phases.

Post-Clone Verification

The post-clone verification is the process of validating all of the cloned functionality. The post-clone verification begins after all answer sets are cloned. The purpose is to ensure that the actual results of each build meet the expected results outlined in the cloning worksheets.

SYSTEM TESTING CONTEXT

The Quality Assurance context within which Testing is performed can be defined by the V-Model of Software Quality.
The V-Model provides a structured quality framework that integrates the application development and testing lifecycles throughout the development process, and it ensures that both verification and validation are applied to all deliverables within a system. It illustrates the development process, from defining benefits through converting the system into production, and helps focus attention on quality throughout. The foundation of the V-Model is phase containment – finding and fixing errors within their stage of origin – by focusing on:

  • Testing throughout the development life cycle.

  • Early development of test requirements, concurrent with the development of application requirements.

  • Utilizing verification and validation techniques on key and high-risk deliverables/work products to facilitate early detection of errors.

Because testing of completed applications usually identifies only 30-50% of application errors, these early checkpoints are key to delivering high-quality applications.
Note: This approach can also be applied to multi-pass methodologies (e.g., Iterative/Incremental/Spiral); the important concept is that test planning/preparation, verification/validation, and traceability are performed throughout the project to promote phase containment and defect prevention.

HRIS Inbound

HRIS Inbound is the most important inbound feed.
Almost all clients send some form of an HRIS inbound.
The HRIS Inbound can have up to 9 record types (none are required):

  • EP record – Personal data
  • EA record – Mailing and email data
  • EJ record – Job data
  • EB record – Benefits-specific data
  • ET record – Tax data
  • ED record – Banking data
  • ES record – Work schedule data
  • EO record – Organizational data

The client can also send client-defined attributes. These attributes are data points that the client would like to have in 360 but for which we do not have existing data elements. The client sends us the attribute ID and value. Attribute IDs are cross-client. A rough sketch of how such records might be dispatched appears below.
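As an illustration, an inbound loader might dispatch each line on its two-character record type. The pipe-delimited layout and handler names below are assumptions, not the actual file specification:

    # Hypothetical dispatcher for HRIS inbound record types (EP, EA, EJ, ...).
    HANDLERS = {}

    def handles(record_type):
        """Register a handler function for a two-character record type."""
        def register(func):
            HANDLERS[record_type] = func
            return func
        return register

    @handles("EP")
    def load_personal(fields):
        print("personal data:", fields)

    @handles("EJ")
    def load_job(fields):
        print("job data:", fields)

    def process_line(line):
        record_type, _, rest = line.partition("|")
        handler = HANDLERS.get(record_type)
        if handler is not None:  # no record type is required; skip unknowns
            handler(rest.split("|"))

    process_line("EP|DOE|JOHN|1980-01-01")
    process_line("EJ|ANALYST|FULL-TIME")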

INBOUND FEEDS

The inbound feeds contain the employee data that is loaded into the system. The client is responsible for sending us the data for all employees in a specified format. Once the inbound feeds are loaded, the data provided by the client can be accessed in the application.
Inbound interfaces are used to load data into 360. The data is used for plan assignment, claim assignment, and payment calculation. It is also used to aid in sending correspondence to the employee and the employer.
The HRIS Inbound contains employee-level information. The Hours file contains employee hours worked and holiday/sick time. The Earnings file contains specific payment records for employees. The Pay Calendar contains the client’s pay calendar by pay group. The LOA Historical Usage file contains employee-level historical usage data. The Workers’ Compensation and Disability files hold third-party data that can be used for reporting.
Each file loaded uses the same adapter for the load. This adapter runs constantly on our interface server. Whenever a new file is placed on the server, the adapter picks the file up and begins the process. If the adapter finds multiple files on the server, the oldest file is loaded first, as the sketch below illustrates.
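A minimal sketch of that pickup behavior, assuming a simple polling loop over a drop directory (the path, polling interval, and load logic are illustrative assumptions):

    import os
    import time

    INBOX = "/interfaces/inbound"  # hypothetical drop directory

    def load_file(path):
        print("loading", path)  # placeholder for the adapter's real load logic

    def poll_once():
        # Process waiting files oldest-first, mirroring the adapter's rule.
        paths = [os.path.join(INBOX, name) for name in os.listdir(INBOX)]
        for path in sorted(filter(os.path.isfile, paths), key=os.path.getmtime):
            load_file(path)
            os.remove(path)  # in this sketch, remove after a successful load

    while True:
        poll_once()
        time.sleep(30)  # polling interval is an assumption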

Types of Data Stored in the Testing Environment

Person Data: Person data, also known as indicative data, is used to determine and administer each participant’s benefits. We store information about each participant employed by each client, and the specific data stored varies from client to client. Person data may be considered request data or result data: request data has not yet been processed, while result data belongs to a completed activity.
Provision Data: In addition to person data, we need to store information about how each client’s benefits plan works. Every client’s benefits plan has different rules that dictate how the plan is run. For example, provision data indicates whether loans are allowed, whether transfers or fund reallocations are used, whether contributions are matched, and which medical options a participant can choose. Provision data allows us to keep this information on the TBA system in each client database.
Runtime Data: Runtime data is information that pertains to dates and times. For example, on a TBA system database there is a calendar that defines the context of the day.
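One rough way to picture the three kinds of data (all class and field names below are invented for illustration; this is not the TBA schema):

    from dataclasses import dataclass, field

    @dataclass
    class PersonData:            # indicative data for one participant
        participant_id: str
        processed: bool          # False = request data, True = result data

    @dataclass
    class ProvisionData:         # per-client plan rules
        loans_allowed: bool
        contributions_matched: bool
        medical_options: list = field(default_factory=list)

    @dataclass
    class RuntimeData:           # date/time context
        business_date: str       # the calendar day that defines "today"

    plan = ProvisionData(loans_allowed=True, contributions_matched=False,
                         medical_options=["Option A", "Option B"])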

Testing Environments

PROD: Every client has a database in PROD, or Production. This is the live participant environment, and it is the most stable.
QAC: This is the primary testing environment. The client team does most of its testing in QAC and may have several different QAC databases. A staging database mirrors the client’s production database so that regression testing can be performed prior to migrating to production.
QA: The QA environment is designed as a “holding ground” for testing new TBA functionality. New programs are tested here by the TBA base team before being sent to the client team in QAC. The client team uses a QA database when regression testing new TBA release functionality.
DEV: Client teams do not use this environment; it is where all of the initial TBA programming is done. Because base code development happens here, it is the least stable environment.

Feasibility Study

After gathering the client’s requirements, the next step is to analyze them. In this phase, the development team communicates with the customer, analyzes the requirements, and tests them against the system. Analyzing the requirements in this way makes it possible to produce a report identifying the problem areas.

Important Aspects
The important aspects of the feasibility study are as follows:
Preliminary work done: Before handling this project, I had studied analysis and testing in a previous semester, and after that I studied system-specific testing at Hewitt, where we were given special training.
Resources Available (RA): All the necessary resources were available, such as a proper platform and a well-designed test plan from the configuration side.
Confidence Factor (CF): I made proper use of the tools and performed well on the platform, namely the 360 ASP tool and AQT.
Capability Index (CI): Sometimes the available resources (tools and platform) could not find an appropriate participant or test case for the test plan, because no such participant existed in the database at the time. In those cases, we modified a participant to match the test plan and checked the result.
Relative Significance Factor (RSF): This type of technology is useful for service industries.
Relative Importance Factor (RIF): We used some proprietary Hewitt tools that are designed according to the client’s requirements, so performance was very high and met expectations.
System Integration Factor (SIF): Integration and regression testing are our responsibility and a main phase of system development, because without them we cannot confirm whether earlier processes such as configuration and unit testing were done well.
Inadequacy Parameter (IP): The word inadequacy means insufficiency, lack, or shortfall for a given technology.
The satisfactory completion of a project requires:
–    Resources Available (RA): High
–    Capability Index (CI): High
–    Confidence Factor (CF): High