
Learn about:-


1. Meaning and Definition of Competency 2. Contributors to the Competency Movement 3. Measurement 4. Approaches 5. Applications.


Competency: Meaning, Definition, Measurement, Approaches and Applications

Competency – Meaning and Definition

Competency refers to underlying characteristics that enable performance in a job role. These characteristics find expression in behaviour and their presence or absence can explain why one person succeeded in a mission, whereas in identical conditions another person did not.

Richard Boyatzis, a consultant to McBer and Company, defined the term competency as – ‘an underlying characteristic of an individual that is causally related to effective or superior performance in a job’. According to Boyatzis, competency includes ‘motive, trait, and skill, aspect of one’s self-image or social role, or a body of knowledge which he or she uses’.

Elaborating on the key expressions in this definition will help you understand it better:


1. Underlying Characteristic:

'Underlying characteristic' means that competency is a fairly deep and enduring part of a person's personality and can predict behaviour in a wide variety of situations and job tasks.

2. Causally Related:

This implies that competency causes or predicts behaviour and performance.


3. Criterion Referenced:

This means that for the sake of objectivity in assessing who does something well or poorly, a specific criterion or standard needs to be defined. All performances are then compared against this standard. This comparison of actual performance with the laid-down criterion becomes the basis for deciding who is an exemplary performer and who is not. In an organization, this criterion for performance may vary from function to function as well as across levels of hierarchy.

For example, for a sales executive a performance criterion could be meeting the annual sales target, whereas for the maintenance engineer it could be ensuring that uptime of critical machinery meets targeted levels. Moving up a level from the sales executive to the sales manager, the criterion could be meeting the sales target for the entire sales team.

Similarly, for the maintenance manager it could be optimizing the total cost of ownership (TCO) and keeping it within targeted levels. Instead of a single criterion, multiple criteria could also be used and combined through a weighted average to generate a scorecard, which would perhaps be a more balanced indicator of performance.
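
To make the idea concrete, here is a minimal Python sketch of such a weighted scorecard; the criteria, scores and weights are hypothetical illustrations, not values prescribed by the text.

# Minimal sketch: combining multiple performance criteria into a weighted scorecard.
# Criterion scores are on a common 0-1 scale; weights reflect their relative importance.
criteria = {
    "annual_sales_target_achievement": (0.85, 0.6),   # (score, weight) - hypothetical values
    "new_customer_acquisition":        (0.70, 0.4),
}

def weighted_scorecard(criteria):
    total_weight = sum(weight for _, weight in criteria.values())
    return sum(score * weight for score, weight in criteria.values()) / total_weight

print(round(weighted_scorecard(criteria), 2))   # 0.79 for the values above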


Organizations that have implemented a key result area (KRA) based system for measuring performance could use KRA scores with cut-offs defined in absolute terms (for example, 4 on a scale of 5) or relative terms (say, the top 20% of the distribution of KRA scores) for that category of employees. Similarly, the appraisal rating could also be considered as a criterion, where the best possible rating could be taken to be the qualifying requirement for defining exemplary performance.
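
A relative cut-off of this kind is simply a percentile threshold. The short sketch below flags the top 20% of a set of KRA scores; the scores themselves are invented for illustration.

# Minimal sketch: a relative cut-off (top 20%) over KRA scores.
import numpy as np

kra_scores = np.array([3.2, 4.6, 3.8, 4.9, 2.7, 4.1, 3.5, 4.4, 3.0, 4.7])   # hypothetical
cutoff = np.percentile(kra_scores, 80)          # 80th percentile marks the top 20%
exemplary = kra_scores >= cutoff
print(f"cut-off = {cutoff:.2f}, exemplary performers = {exemplary.sum()}")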

Consider, for instance, a star salesperson closing a difficult sale. His or her superior product knowledge enables him or her to demonstrate behaviour that leads to achieving the performance goal. An average salesperson with adequate but not superior product knowledge would not have closed the sale. If this is true, product knowledge can become a predictor of success in the sales job. It fits nicely into how Spencer and Spencer (1993) have qualified competency—'a characteristic is not a competency unless it predicts something meaningful in the real world'.


Competency – Top 7 Contributors to the Competency Movement

1. John Flanagan:

The work done by John Clemans Flanagan is regarded as the precursor to the emergence of competency as a concept. In 1954 in a landmark article, Flanagan presented the concept of critical incident technique (CIT).

Flanagan was a pioneer in the field of aviation psychology. During the Second World War, he was commissioned by the US Army Air Corps to lead an aviation psychology project that would develop tests for qualifying and identifying pilots who would be suitable for carrying out combat missions.


The programme involved around 150 psychologists and over 1,400 research assistants. This article showcased a series of studies conducted by this US Air Force aviation psychology programme between 1941 and 1946. These studies focused on factors associated with pilot performance.

Such factors would include 'failure in learning how to fly', 'reasons for failure in bombing missions', 'problems in combat leadership' and 'disorientation while flying'. From these studies, Flanagan came to the conclusion that job analysis should aim at determining the critical requirements of the job.

These requirements should include those factors that make the difference between success and failure in carrying out an assigned job on a significant number of occasions. This article put forward a five-step approach for conducting a CIT-based study of a job activity.

The five steps are:


1. Determining the general aim of the activity

2. Developing plans and specifications to collect factual incidents regard­ing the activity

3. Collecting the data

4. Analysing the data


5. Interpreting and reporting the requirements of this activity.

Thus CIT identified and classified behaviours associated with success and failure in carrying out a given activity. Trainee pilots and their observers were asked to recount incidents where a trainee pilot had either succeeded or failed. From those incident reports, the team of psychologists under Flanagan tried to determine the personal characteristics that had a bearing on success.

Then tests were devised to capture those characteristics. The test scores in turn identified those trainee pilots who were highly likely to successfully complete the training. This programme resulted in significant savings in training costs as failure rate as well as accidents came down drastically. Flanagan was honoured with the Legion of Merit.

In 1946, he founded the American Institutes for Research, a non-profit behavioural and social research organization that applied CIT to education and other fields, thereby establishing its wide range of applicability. In 1960, Flanagan launched Project Talent, wherein he interviewed 400,000 students all across the United States to capture their learning aspirations.

Subsequently, based on the findings of Talent, he launched another programme called PLAN (Program for Learning in Accordance with Needs). This led to the design of a comprehensive curriculum from grade 1 to 12 which was both individually customized and computer aided.

Flanagan's focus was clearly not on competencies. However, his work laid the foundation for a new way of looking at a job requirement. The CIT propounded by him was later modified and assumed the form of behavioural event interviews (BEIs), which became an important tool for both mapping and assessment of behavioural competencies.

2. Robert White and David McClelland:


The notion of human competence came to the forefront of Human Resource Development (HRD) through the concurrent works of these two psychologists. They are therefore looked upon as the rightful pioneers. Competency advocates such as McClelland, Baldwin, Bronfenbrenner and Strodtbeck have argued that assessments of employees' competencies provide an effective method for predicting job performance.

Robert White introduced the term competence. David C. McClelland built on and extended White's work. The starting point of the competency movement in its present shape is often identified with an article published by McClelland of Harvard University.

In 1973, McClelland initiated the competency modelling movement by publishing an article called ‘Testing for Competence Rather than Intelligence’, wherein he established that traditional achievement and intelligence scores may not be able to predict job success and what is required is to profile the exact competencies required to perform a given job effectively and measure them using a variety of tests.

He challenged the value of intelligence testing and the resultant use of an 'intelligence quotient' or IQ score as a predictor of successful performance. McClelland said that IQ and personality tests were poor predictors of competency.

He observed that although performance is influenced by a person’s intelligence, other personal characteristics, such as – core motives and self-image, operate within the individual to differentiate successful from unsuccessful performance in a job role. He felt that companies should hire on the basis of competencies rather than aptitude test scores.

In 1973, McClelland was engaged by the United States Information Agency (USIA) to explore new methods for predicting job performance as compared to the prevalent method of taking the help of traditional intelligence and aptitude tests. McClelland and his associates took the help of the administration at USIA to identify two groups of employees, each having a strength of 50.


One group comprised employees with an outstanding track record, whereas the other had people who did an adequate job. Employees in the two groups were asked to describe incidents where they had done either an outstanding job or had fared poorly. The incidents revealed competencies and an analysis of the findings led to identification of the competencies that differentiated the group of outstanding employees from the other group. Tests were then designed to capture the identified competencies.

The tests were validated by application on another pair of employee groups—one group having employees with an outstanding track record and the other having employees who performed adequately. It was found that the test scores for the two groups differed at statistically significant levels.

The competencies identified by McClelland and his associates turned out to be quite different from the job-related attributes which had been identified earlier by an expert panel for the United States Information Service and which provided the basis for the prevalent selection process.

The work of McClelland and his associates resulted in the creation of a research process called the job competence assessment method (JCAM). There are six stages one needs to work through in JCAM.

These are as follows:

1. Establish the performance criteria


2. Identify people for the criterion samples

3. Collect data using BEIs or other methods

4. Analyse data and define the competencies

5. Validate the model

6. Design applications.

This method relies on the use of rigorous empirical research, which helps to determine what job competencies differentiate exemplary from average job performance. Once the competencies have been determined, they are used to construct the job competency model by researching the attributes of the exemplary performers.


The said model then has to be validated by comparing test scores with actual performance in the job. McClelland's consulting company, McBer and Associates, in collaboration with the American Management Association, featured in a pioneering study on competency in the 1970s.

The purpose was to find out the competencies that lead to exemplary managerial performance. This study, which spanned over five years, covered over 1,800 managers in the United States.

They identified the following five managerial competencies:

1. Specialized knowledge

2. Intellectual maturity

3. Entrepreneurial maturity


4. Interpersonal maturity

5. On-the-job maturity.

Out of these, specialized knowledge seems to be possessed by both fully successful and exemplary performers. But the other four competencies clearly differentiated the two groups of performers.

JCAM sanctions the use of BEI for competency assessment. BEI, which is a modified form of the CIT designed by Flanagan, is considered to be a highly reliable and valid resource for measuring competencies. One of the conventional approaches for measuring competencies would be to design competency tests.

Although researchers have developed many such tests, the acceptability of such tests has always remained an issue. However, BEI as an assessment tool has achieved wide acceptability. Various studies conducted by consulting companies like McBer and Company have shown that competency measures drawn from such interviews could predict job success for high-level executives.

3. Richard Boyatzis:

In 1982, Richard Boyatzis wrote the first scientific and well-researched book on competency modelling, namely The Competent Manager – A Model for Effective Performance. According to Boyatzis, one needs to examine the person in the job and not only the job, and look for that ‘underlying characteristic’ that makes ‘superior or effective performance’ possible. He distinguishes between levels and types of competencies.

Competencies exist within individuals, at unconscious, conscious and behavioural levels. Competencies are made up of motive, trait, self-image, social role and skill. They are generic, that is, apparent in many forms of behaviour or different actions. They represent what people can do, not necessarily what they do.

Boyatzis differentiates between the different levels and different types of competencies and arrives at 21 characteristics that relate to effective performance at statistically significant levels and are not unique to the product or service of the organization. He evolved this competency framework using, among other tools, the BEI technique.

These 21 characteristics are classified into the following six competency clusters:

1. A focus on goal and action

2. Leadership

3. Human resource management

4. Directing subordinates

5. Focus on others

6. Specialized knowledge.

Boyatzis emphasized the interdependence of effective job performance with the individual's competencies, the demands of the job and the organizational environment, and identified threshold and differentiating competencies. The approach of Boyatzis has been widely adopted and variously interpreted.

Boyatzis is also credited with contributing substantially to perfecting the application of thematic analysis in competency modelling. Thematic analysis seeks to bring together the worlds of qualitative and quantitative research. Qualitative research in the context of competencies delves into sensing and recording competency themes from available data and information.

Quantitative research uses statistical methods on available data to arrive at meaningful conclusions. Thematic analysis facilitates objective interpretation and recording of themes allowing their subsequent conversion to a numerical form suitable for using statistical methods advocated by quantitative research.

He has also worked closely with Daniel Goleman to evolve the concept of emotional competence. Together they created a model, the emotional and social competency inventory (ESCI). There are 14 competencies pertaining to two clusters, namely cognitive and emotional.

Boyatzis, along with Patricia McLagan, also made significant contributions in the area of competency development. They have been credited with designing many training and development interventions for several organizations seeking to strengthen 'differentiating competencies'.

Another noteworthy contribution by Boyatzis in the area of competency development has been the application of intentional change theory. Boyatzis has maintained that the ideal self and personal vision drive the process of intentional change. He has delineated a five-step approach that defines the process. He called them ‘Five Discoveries’.

These are as follows:

1. The ideal self and a personal vision

2. The real self and its comparison to the ideal self

3. A learning agenda and plan

4. Experimentation and practice with the new behaviour, thoughts, feelings or perceptions

5. Trusting relationships that enable the person to experience each discovery.

Information and communications technology (ICT) has found use in management development programmes for developing leadership competencies.

4. Patricia McLagan:

She used the concept of competency to integrate all aspects of human resource development activities in an organization. She came up with the idea of using competency studies to bring about a holistic improvement in HRM systems in an organization.

The scope of this initiative included the following:

1. Recruitment and selection

2. Assessment

3. Individual development programme

4. Training curriculum design

5. Coaching, counselling, mentoring and sponsoring

6. Succession planning and identifying employees with high potential

7. Career pathing.

Along with Boyatzis, she directed training interventions which would be tailored to suit the uniqueness of the organizational situation defined by the difference in competencies between the exemplary and fully successful job incumbents.

McLagan has come up with a number of useful competency models using an outputs-driven approach which continues to have a strong association with her. Her Models of Excellence and Models for HRD Practice, which include the competency models for training and development and HRD professionals, have been sponsored and adopted by the American Society for Training and Development (ASTD).

The research work related to setting up the models for HRD practice was unique for its size and complexity. The study had a four-stage process where three research groups worked independently.

The groups were as follows:

1. The task force comprising 22 prominent individuals from the field of HRD

2. A group of 10 members from the ASTD’s organizational development (OD) professional practice area

3. Ten staff members from McLagan International.

The first phase of the study identified future forces affecting HRD professionals, HR roles, HR deliverables and the HR competencies necessary to deliver results.

In phase 3, a survey form was mailed to 705 role experts for capturing details about their individual roles. In phase 4, a second-level survey was conducted on 473 experts to seek out a greater level of detail.

Finally, the report tabled by McLagan (1989a) included the following:

1. Future forces having a potential impact on the HRD field and HRD roles

2. Roles of HRD practitioners

3. Outputs of HRD

4. Competencies of HRD practitioners

5. Quality requirements for each output

6. Ethical issues related to HRD work.

The project and its findings received a lot of attention, accolade and acceptance from practising professionals.

5. Lyle and Signe Spencer:

Their book Competence at Work – Models for Superior Performance, published in 1993, is described as one of the most comprehensive books on competency modelling. It has also given a detailed account of how to design and carry out competency assessment using BEI.

Spencer and Spencer have done a commendable job of building up on the work of Flanagan, McClelland, Boyatzis and the research projects undertaken by McBer & Associates.

According to Spencer and Spencer, the primary goal of using competency assessments to evaluate individuals is to improve job performance. Assessments are carried out for a variety of purposes, including selection, performance management, compensation and succession planning.

In the case of performance management, the use of competency assessment appears to be fairly widespread. They have also postulated that competency includes both implicit and explicit traits that are related to understanding and prediction of work performance. Competency was further categorized into five groups – motive, trait, self-concept, knowledge and skill.

They have also provided powerful insights into how to design a competency model for an organization relying essentially on the job competence assessment methodology popularized by McClelland. The process begins by identifying those jobs that have 'high value in relation to the organization'.

After selecting such jobs, a group of exemplary and fully successful performers is selected, BEI is conducted on them, competency information is collected, the exemplary and fully successful performers are compared, and a draft competency model is put up showcasing the differentiating competencies. The draft model subsequently needs to be validated before it gets adopted, with or without the modifications that may be necessitated.

Spencer and Spencer have also come up with a number of competency dictionaries where competencies are organized into clusters. These can be used as a reference point while coding BEI data. The dictionary can also be used for the purpose of benchmarking. Competence at Work lists six competency clusters in a dictionary, containing a total of 19 competencies.

This dictionary was the result of a significant research project that worked on prior-research-based data. The dictionary was derived from an original list of 286 partially overlapping existing competency models. There were around 760 different behavioural indicators listed in these models.

However, out of these, around 360 indicators related to 21 competencies accounted for 80 per cent to 98 per cent of the behaviour reported in each model. The remaining, rarely observed competencies were termed 'unique' and dropped from the consideration set. Thus the preliminary competency dictionary comprised 21 competencies having 360 behavioural indicators.

The dictionary was subsequently refined by interviewing subjects from 21 different countries using BEI. Categorization has been done across geographies such as – Asia, Africa and South America.

Competencies have also been categorized across varied domains which include the following:

1. Industry

2. Government

3. Military

4. Health care

5. Education

6. Religious organizations.

They have also designed competency models for the following functional groups namely:

1. Entrepreneurs

2. Technicians and professionals

3. Salespeople

4. Helping and human service workers

5. Managers.

Spencer and Spencer’s contribution can be best summarised as building on the work done by the predecessors through relentless application and effectively demonstrating the ‘how to’ aspects of competency mapping and modelling thereby making it popular.

6. Gary Hamel and C. K. Prahalad:

Gary Hamel and C. K. Prahalad (1994) propounded the idea of organizational core competency. In their book Competing for the Future, core competencies have been described as those capabilities which transcend individual performance and acquire the form of an organizational strength which is strategic in nature and contributes to making the organization competitive in its environment.

Therefore, organizations have to identify, develop and manage organizational core competencies.

Hamel and Prahalad have identified three characteristics of a core competency, which are as follows:

1. It provides potential access to a wide variety of markets.

2. It makes a significant contribution to the perceived customer benefits of the end product.

3. It is difficult for competitors to imitate.

Core competencies and market opportunities can help organizations delineate their strategies. Hamel and Prahalad have advocated four possible options in terms of strategies.

Individual job competencies add up to determine organizational core competencies. Well-defined core competencies and market potentials help in determining the appropriate strategies for the organization. Therefore, one must not lose sight of the big picture in terms of how individual competencies contribute to the realization of the organization's strategies in meeting its growth aspirations.

Equally significant is the need to acquire new competencies to meet the strategic imperatives of launching a vastly enhanced premier offering to existing as well as new market segments. In short, competency modelling should be viewed as a tool to develop the competitive advantage of the organization.

7. Dave Ulrich:

Dave Ulrich (1997) has further expanded on the work done by Hamel and Prahalad. Ulrich used the term organizational capability to represent skills, abilities and expertise that lie within the organization.

Traditionally, an organization’s competitiveness is developed by its uniqueness stemming from three capabilities which are as follows:

1. Marketing or strategic

2. Financial

3. Technological.

Ulrich propounded the concept of organizational capability as the crucial fourth dimension that contributes to competitive advantage and therefore requires careful nurturing. He developed a model on HR competencies.

Ulrich’s model shows six roles that effectively showcase what is needed from a HR professional. Along with knowledge and ability, the HR professional needs to be aware of the roles that he or she is expected to play. So Ulrich’s competency domains or clusters sound like roles.

Brief elaboration on each of the six roles is provided below:

i. Credible Activist:

The HR professional has to be both credible (respected) and active (holds a point of view, challenges assumptions, has voice and energy). If there is only credibility, there is no impact; if there is only activeness, there is resistance.

ii. Culture and Change Steward:

The HR professionals are supposed to uphold the culture by weaving cultural standards into HR policies and practices. They are also supposed to sensitize managers about how their actions affect the culture. When changes are needed in the culture—often driven by forces outside the organization, for example, shift in customer preferences or a change in technology—HR professionals need to drive change from the forefront by championing the change programmes and initiatives.

iii. Talent Manager/Organizational Designer:

The HR professionals by virtue of their specialized knowledge would be expected to own two pillars of the HR system, namely talent management and organization design. They must do full justice to this role and in the process ensure that both talent management and organization design get perfectly harmonized.

iv. Strategy Architect:

The HR person needs to be fully involved in the overall strategy formulation of the organization by linking the internal organization to the expectations of the external customer. Once the strategy is formulated, the HR professional needs to be alert and active to ensure its implementation by constantly gauging external changes and their impact on the people and processes and initiating needed actions.

v. Operational Executor:

One of the primary responsibilities of the HR person would be to ensure smooth operations. If HR operations are carried out in a flawless manner and are consistently in keeping with HR policies, then it helps in establishing the credibility of the HR function. Unfortunately the reverse is also true.

vi. Business Ally:

The HR professionals must have in-depth knowledge about the key business processes. This knowledge should come in handy for them to lend necessary support and assistance to business units so that they are able to meet their revenue targets.

HR professionals play an important role at the intersection of people and business issues. The model implies that they would have to keep their focus on both business and people. Any leaning this way or that way would lead to an imbalance. The credible activist is located at the crux, a position of great significance; if this competency domain is weak, playing the other roles becomes difficult.

Built through a sustained research effort spanning nearly two decades (1988-2007) and surveying around 10,000 subjects, the model prepared by Ulrich continues to inspire HR professionals to realize their fullest potential.

Other Notable Contributors:

Many researchers have backed the pioneers with data and evidence. A few of them have found mention in the following paragraphs with a brief account of their work.

David D. Dubois has defined competence as the employee's capacity to meet or exceed a job's requirements by producing the job outputs at an expected level of quality within the constraints of the company's internal and external environments. He adopted Boyatzis' definition of competency, preferring to see it more as an underlying characteristic that can cause superior performance in a job.

Dubois has contributed significantly in finding answers to the challenges in the area of competency development. Arguably, developing the underlying characteristics which are purely behavioural can turn out to be an arduous proposition. Dubois has suggested that for such competency-centric training to be successful, the trainer must use all three modes of learning, namely affective, cognitive and psychomotor.

Dubois, along with Rothwell, co-authored a book titled Competency-Based Human Resource Management, where they have spoken about the inefficacy of force-fitting people into jobs. They have also provided worthwhile tools for identifying potential high performers based on differentiating competencies, which would be of aid to HR professionals and line managers.

Michael Hammer and James Champy in their bestselling book Reengineering the Corporation (1993), have presented the difficulty in improving established workplace practices. Instead of production time, success is now measured on quality issues, such as cycle time, error rate and resultant internal and external customer satisfaction.

Hence, according to Hammer and Champy, today's workers require a new set of competencies. Hammer and Champy have advocated business process reengineering to remove all processes that do not add value to the customers or are not aligned with the core competency of the organization.

So individual competencies have to be, as far as possible, aligned with the core competencies of the organization. This is in keeping with the spirit of efficiency and thrift.

Charles Woodruff has made a notable contribution to theorization around competency. He has distinguished between competence and competency. According to Woodruff, competence is a performance criterion that has to be achieved by the person. Competencies are the personal characteristics that make work performance possible.

In his own words, 'Competency is a set of behaviour patterns that the incumbent needs to bring to a position in order to perform the job with competence'. A few other researchers subsequently joined him in this debate. The concept of competence does not get discussed so much now, but it did help people to develop greater clarity about the concept of competency and particularly appreciate the shift towards the job holder and away from the job.


Competency – How to Measure Competency? (Steps)

1. About Testing:

A psychological test is essentially an objective and standardized measure of a sample of behaviour. The sample of behaviour is described with categories or scores. Psychological tests are widely used in organizations. Arguably, there are significant personality differences among individuals, and organizations also need this diversity. Individual differences in personality can influence work behaviour in organizations.

An individual may differ from another in respect of traits, motives, ability and skill. This in turn will influence work outcomes. Similarly, people with different personalities will interact and behave differently with their bosses, colleagues and others.

Psychological tests are used in organizations primarily for:

1. Selection and classification of industrial personnel and

2. Assessment of potential or competencies of existing employees.

Assessment of individual differences has a very long history.

Francis Galton (1822-1911) of England was the first scientist to undertake systematic and statistical investigations of individual differences. He developed a variety of tests for sensory discrimination for measuring a person's intellect. Galton was also a pioneer in the application of rating scales and questionnaire methods. He also started using statistical tests in the analysis of data on individual differences.

Wilhelm Wundt founded the first psychological laboratory in 1879 in Leipzig, Germany. In 1862, he attempted to measure the speed of thought with the thought meter. In Germany, psychological tests were introduced in 1885 to test people for brain damage.

In 1850, the United States began using psychological tests in civil service examinations. In 1890, the American psychologist James Cattell developed a 'mental test' to assess college students. The test included measures of strength, resistance to pain and reaction time.

Group testing was started to meet a pressing need. World War I produced the necessity for quick classification of incoming recruits. When the United States entered World War I in 1917, the Army Alpha and Army Beta tests were developed.

With the effort of psychologists and psychometricians, organizations started using psychological tests for selection of personnel.

2. Defining the Test:

Two aspects are very significant here:

1. Understanding the construct – In order to develop a new instrument, one must develop a clear idea about what the test will measure. A thorough literature survey needs to be done to understand the meaning of the construct (competency) for which the test is required to be developed.

2. Defining an appropriate domain of content or body of relevant material is essential. The domain includes both the material to be tested on and the population for whom the material will be suitable.

3. Selecting a Scaling Method:

The main objective of psychological tests is to assign numbers to responses so that the characteristics become measurable. Numbers assigned to a scale follow certain rules. Numbers derived for any kind of instrument can be from any one of four categories – nominal, ordinal, interval and ratio. Each category defines a level of measurement.

Another major decision has to be taken regarding the type of scale to be used. Some popular scales are Likert scale, Equal-Appearing Interval Scale, Semantic Differential Scale, Guttman scale and so on.

4. Constructing the Items:

Constructing test items is a laborious process and makes significant demands on the creativity of the test developer.

In the case of instruments designed for measuring competencies, BEI transcripts may be a potential source of items. The parts of the BEI transcripts which got coded for competencies could, with or without modification, be used for developing related items or questions.

If, for some reason, BEI transcripts are not available, it is advisable that a group of randomly selected employees be interviewed for critical incidents. The interview transcripts would have to be coded using the competency model of the organization. One would have to make sure that for every competency listed in the model, a sufficient number of connected critical incidents is obtained.

In case this is not achieved, the sample size would have to be increased and the exercise continued till a satisfactory number of critical incidents has been obtained connecting to each and every competency of the model.

The questionnaire drafted would undergo progressive refinement in the subsequent stages of the process. This process of modification would see several items get eliminated leading to a reduction in the size of the questionnaire. Therefore, it is appropriate to start with as many items as possible in the draft stage.

5. Gathering Data:

The draft questionnaire should be administered to a group of randomly selected employees. It is important that the purpose of the exercise be explained to the test takers before the test is administered. It should be explained to them that HRD is trying to design a test that would be used for the purpose of assessing job applicants as a part of the selection process.

The test is not an attempt to assess the test takers by any stretch of imagination. This would allay any apprehension that the test takers may be having. Fear and apprehension, if not neutralized at the outset, can often force test takers to mask their genuine responses and provide smart answers to the test questions.

6. Item Analysis:

The draft questionnaire is only a first-cut prototype. This needs to be progres­sively refined by removing questions or items that are inappropriate. The first step in this endeavour is item analysis. Here the weak or ineffective items are identified.

Once the weak items are removed, the reliability and the validity of the test are likely to improve noticeably.

Item analysis involves a rigorous quantitative analysis which is detailed as follows:

1. Important Terms Associated with Item Analysis:

There are two important terms associated with item analysis.

They are elaborated as follows:

i. Item Difficulty:

The measurement of item difficulty is appropriate for the test items which have right and wrong answers. For such items which do have a right answer, item difficulty simply means what fraction or percentage of respondents has answered that question correctly. If this is very high, it signifies that the item is too easy and ought to be removed.

The same recommendation would hold for those items that have been found to be too difficult by the respondents, resulting in a small percentage of them answering it correctly. Item difficulty (p) is measured by the number of respondents answering the question correctly divided by the total number of respondents.

The index of item difficulty varies from 0 to 1. An item with difficulty 0.3 is considered more difficult than an item with difficulty 0.5. Anastasi and Urbina (2017) have suggested that a p value between 0.3 and 0.7 is desirable.
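
The computation is simple enough to sketch in a few lines of Python; the 0/1 response matrix below is a hypothetical illustration (rows are respondents, columns are items).

# Minimal sketch: item difficulty p = proportion of respondents answering an item correctly.
responses = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 1],
    [1, 0, 0, 1],
]

n_respondents = len(responses)
n_items = len(responses[0])

for item in range(n_items):
    p = sum(row[item] for row in responses) / n_respondents
    flag = "retain" if 0.3 <= p <= 0.7 else "review"   # the 0.3-0.7 band suggested above
    print(f"item {item + 1}: p = {p:.2f} ({flag})")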

ii. Item Discrimination:

Are the items able to adequately differentiate the high scorers from the low scorers? Item discrimination seeks to measure this differentiating ability of an item. An item discrimination index is a statistical index of how efficiently an item discriminates between persons who obtain high and low scores in the entire test.

Currently, there are over 50 such indices which find use in test construction. Although the indices vary in shape and form on account of differing procedures and assumptions, they tend to provide similar results. A low item discrimination index would suggest that the item should be dropped.

a. Item Discrimination by Use of Extreme Groups:

Here, one compares the item scores in two contrasting criterion groups and checks if the difference in scores is statistically significant. This method is applicable where the criterion is measured along a continuous scale, for example, total test scores, irrespective of whether there is a right answer for each item or not.

The upper (U) and lower (L) criterion groups are selected from the extremes of the distribution of total scores obtained in the test. Where the data follows the normal distribution, one may consider taking up the top and the bottom 27 per cent of the sample to define the U and L criterion groups.

However, for distributions that are flatter than the normal curve, it may be advisable to work with a figure higher than 27 per cent. Cureton (1957b) has even suggested that this could approach 33 per cent. Then for each item the mean scores of the U and L groups are compared. If the scores of the U and L groups do not differ significantly, then the item should be discarded.

The item discrimination index for a test item (where there are right and wrong answers) is calculated by using the formula,

D = (U-L)/N

Where U is the number of respondents in the upper group who answered the item correctly, L is the number of respondents in the lower group who answered the item correctly, and N is the number of respondents in each group (the upper and lower groups being of equal size).

Item discrimination for each item can thus be obtained by subtracting the number of respondents in the L group giving the correct answer from the number in the U group giving the correct answer, and dividing the difference by the number of respondents in each group. This can also be expressed in percentage form. For example, if 15 respondents in a 20-member U group and 4 respondents in a 20-member L group answer an item correctly, the item discrimination is (15 - 4)/20 = 0.55 or 55%.

This index has, over the years, been popularized by Johnson (1959), Ebel (1979) and Oosterhof (1976). The index has been denoted as U-L, ULD1, ULD or simply D. As an index of discrimination, the value usually gets expressed as a difference of percentages of correct answers. Quite obviously, the D value can vary between +1 and -1.

Despite its simplicity, D is construed to be a fairly accurate measure of item discrimination bearing strong correlation with the findings of statistically more elaborate and rigorous methods.

Anastasi and Urbina (2017) have suggested that for a group of 60 respondents, a difference of 3 (the corresponding D value is 15%) would be an indicator of adequate discrimination. Items whose U and L scores differ by less than this would have to be dropped on account of low discrimination.
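
The index is easy to compute directly from the group counts. The sketch below uses the same hypothetical counts as the worked example above; the group size of 20 is an assumption made purely for illustration.

# Minimal sketch: upper-lower (U-L) item discrimination index, D = (U - L)/N.
n = 20               # respondents in each extreme group (hypothetical)
upper_correct = 15   # upper-group respondents answering the item correctly
lower_correct = 4    # lower-group respondents answering the item correctly

D = (upper_correct - lower_correct) / n
print(f"D = {D:.2f}")   # 0.55, i.e., 55%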

b. Item Discrimination by Item-Total Correlation:

Another very common and classical index of discrimination is the 'item-total' correlation. It is the correlation between the respondents' responses on each item and their total scores. The method is applicable for items with or without right answers. The rationale behind this approach is easy to comprehend. A reasonably significant correlation would indicate that scoring for that item is consistent with the overall scoring and that the item ought to be retained.
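
As a quick sketch, the item-total correlation can be computed with an ordinary Pearson correlation; the item responses and total scores below are hypothetical. (A stricter variant, the corrected item-total correlation, excludes the item itself from the total before correlating.)

# Minimal sketch: item-total correlation using Pearson's r.
import numpy as np

item_scores  = np.array([1, 0, 1, 1, 0, 1, 0, 1])            # one item, scored 0/1 (hypothetical)
total_scores = np.array([34, 21, 30, 28, 19, 33, 25, 31])    # each respondent's total test score

r = np.corrcoef(item_scores, total_scores)[0, 1]
print(f"item-total correlation = {r:.2f}")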

7. Testing for Reliability:

Reliability refers to the consistency of scores obtained by the same subjects when they undergo re-examination using the same instrument or its equivalent or under different examining conditions. Measures of reliability make it possible to estimate what proportion of score variances can be attributed to chance.

The degree of reliability, in statistical terms, is measured through correlation.

There are several measures of reliability which have been detailed as follows:

1. Test-Retest Reliability:

The test-retest method measures the stability of the instrument. A measure is said to be stable if it gives consistent results with repeated measurement of the same person with the same instrument. Here the test scores are correlated with the scores of a retest conducted sometime later.

The suggested interval between test and retest is two weeks to a month. The coefficient of reliability thus would simply be the correlation of the two sets of scores. The higher the reliability, the greater the chance that the results are not affected by chance conditions.

2. Alternate Form Reliability:

Sometimes retest scores may show an improvement as the subject gets more time to think about the same problem. Sometimes the respondent may not think at all—he would simply recall his last answer and replicate it. Either way, the retest results can get compromised.

To circumvent this problem, the retest is carried out using a different form of the same test. These two tests are called parallel tests and contain equivalent items. As before, the correlation between the two test scores would indicate the degree of reliability.

3. Split-Half Test Reliability:

Here the test items are split into two equal groups. Subsequently, the scores of one half are correlated with the scores of the other half. In case a strong correlation is established, it shows that the results of the two parts of the test are consistent. This is a test of internal consistency.

An advantage over the previously discussed methods is that testing split-half reliability involves a single administration of a single form and therefore demands fewer resources. The formula for calculating reliability from the correlation score is –

R=2r/(1+r)

Where, R is the reliability of the instrument and r is the correlation coefficient between two halves of the test.
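
This step-up from the half-test correlation is the Spearman-Brown formula for a test of doubled length. A one-line sketch, assuming r has already been computed between the two half-test scores:

# Minimal sketch: split-half reliability stepped up with R = 2r / (1 + r).
def split_half_reliability(r):
    return 2 * r / (1 + r)

print(round(split_half_reliability(0.70), 2))   # 0.82 when the halves correlate at 0.70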

4. Cronbach’s Alpha and Kuder-Richardson Formula:

Here the test reliability is established by examining the performance of each item rather than using two split-half scores. The Kuder-Richardson formula is used when the test items are scored either as right or wrong or all-or-none. Cronbach's Alpha determines the homogeneity of a test.

The reliability coefficient Alpha is in a way an average of all the split-half coefficients arising out of the different ways of splitting a test. When the test is composed of dichotomous items (right or wrong or all-or-none), Cronbach's Alpha takes a special form which is known as the Kuder-Richardson formula 20 (KR-20).
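
A minimal sketch of the computation follows; the small matrix of item scores is hypothetical (rows are respondents, columns are items). Because the items here are scored 0/1, the value also corresponds to KR-20.

# Minimal sketch: Cronbach's Alpha = k/(k-1) * (1 - sum of item variances / variance of totals).
import numpy as np

scores = np.array([
    [1, 1, 1, 0],
    [1, 0, 1, 1],
    [0, 0, 1, 0],
    [1, 1, 1, 1],
    [0, 1, 0, 0],
])

k = scores.shape[1]                              # number of items
item_variances = scores.var(axis=0, ddof=1)      # variance of each item across respondents
total_variance = scores.sum(axis=1).var(ddof=1)  # variance of respondents' total scores
alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
print(f"alpha = {alpha:.2f}")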

8. Testing for Validity:

A test is considered valid if it measures what it is supposed to measure and does so well.

Fundamentally, all measures of validity seek to examine the relationship between test performance and actually observable behaviour related to the characteristics that the test is trying to measure.

There are three measures of validity which have been briefly explained as follows:

1. Content Validity:

Content validity is determined by the degree to which the questions, tasks or items in a test are representative of the universe of behaviour the test was designed to sample. Content validity is the sampling of the items drawn from a larger population of potential items that define what the test purports to measure. The relevance of the test items is judged by experts.

2. Criterion-Related Validity:

For a sample of test takers, the test scores get compared with the criterion scores obtained from other sources.

Criterion validity is of two types:

a. Predictive and

b. Concurrent.

a. Predictive Validity:

In this approach, whether the criterion-based validity has been established or not can only be known after a considerable time interval. For example, the interview assessment scores of candidates can be compared with their performance in the job (criterion). In case a correlation can be established, the interview assessment process would stand validated.

b. Concurrent Validity:

It is relevant for cross-checking or estimating the existing status rather than predicting the future outcomes. For example, if a test measures managerial potential, then the test can be administered on managers and the findings be compared with their actual performance ratings.
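
In either case, the criterion-related validity coefficient is simply the correlation between the test scores and the criterion measure. A small sketch with hypothetical numbers:

# Minimal sketch: criterion-related validity as a correlation between test scores
# and an external criterion (hypothetical job-performance ratings).
import numpy as np

test_scores         = np.array([62, 75, 58, 81, 69, 73, 55, 78])
performance_ratings = np.array([3.1, 4.2, 2.8, 4.6, 3.5, 4.0, 2.5, 4.3])

validity = np.corrcoef(test_scores, performance_ratings)[0, 1]
print(f"criterion-related validity coefficient = {validity:.2f}")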

3. Construct Validity:

The construct validity of a test is the extent to which the test may be said to measure a theoretical construct or trait. Correlation between a new test and other similar standardized tests for the same trait can establish the construct validity of the test. Another way of determining construct validity could be through factor analysis in situations where the new test measures multiple traits. Convergent and discriminant validation are other methods of assessing construct validity.


Competency – 2 Main Approaches for Assessing Competency: Perception-Based Assessment of Competencies and Assessment-Centre-Based Evaluation of Competencies

Mapping competencies is of little use unless one devises a mechanism for assessing the competencies. There are various methods prevalent for the assessment of competencies.

These methods can be broadly categorized under two heads:

A. Perception-Based Assessment:

Using methods such as self-rating, peer rating, superior rating, customer rating or a combination thereof like 360 degree rating, a score is obtained for each competency that has been mapped for the position that the incumbent holds. Using the simplifying assumption that the rating scale is continuous, a simple average is often worked out to present an overall competency score.
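
A minimal sketch of this averaging is given below; the raters, competencies and ratings are hypothetical, and each competency is averaged across raters before the overall score is taken.

# Minimal sketch: overall competency score from multi-rater (e.g., 360 degree) ratings on a 1-5 scale.
ratings = {
    "communication":  {"self": 4, "peer": 3, "superior": 4, "customer": 3},
    "customer_focus": {"self": 5, "peer": 4, "superior": 4, "customer": 5},
    "planning":       {"self": 3, "peer": 3, "superior": 2, "customer": 3},
}

competency_scores = {c: sum(r.values()) / len(r) for c, r in ratings.items()}
overall = sum(competency_scores.values()) / len(competency_scores)

for c, s in competency_scores.items():
    print(f"{c}: {s:.2f}")
print(f"overall competency score: {overall:.2f}")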

Perception-based assessment could be fast, convenient and inexpensive because no specialized tool or expertise is called upon for carrying out such assessments. However, the assessments are subjective and their accuracy is limited by the extent of the gap between perception and reality.

B. Assessment Done through Assessment Centres:

Assessment centres advocate the use of multiple methods such as psychometric tests, in-basket exercises, group discussions, case studies and role plays to objectively assess the job holder for the various competencies that are associated with the position. Any bias that may have been induced by perception is weeded out by using multiple assessors.

A simple average of the scores can be worked out to present an overall competency score. Needless to say, if properly designed keeping in mind the requirements of the job position, assessment centres could give very accurate results. However, because assessment centres need specialized tools and resources, they are expensive.

A. Perception-Based Assessment of Competencies:

In its very basic form, a perception-based assessment would require incumbents to be rated on a competency model using a predefined proficiency scale.

There are a few popular forms of perception-based assessment.

They are as follows:

1. Self-Rating:

Incumbent himself fills in the rating sheet.

2. Peer Rating:

Peers—meaning people holding similar positions in the organization in terms of level and status. Distinction is sometimes made between ‘internal’ and ‘external’ peers. Internal peers are those who are from the same department and often reporting to the same superior. External peers are from other departments and more often than not have a different superior. Internal peers are considered knowledgeable peers and are qualified to assess technical and func­tional competencies.

3. Superior Rating:

The most prevalent form of assessment, being at the heart of most performance appraisal processes.

4. Subordinate Rating:

A somewhat uncommon form of assessment except in organizations where 360 degree assessment is well established as a practice. It can induce guarded ratings for fear of reprisal, which could be purely imaginary. When launched for the first time, this can also attract resistance from superiors because of cultural dissonance.

5. Customer Rating:

Followed by some organizations like IBM for the sales function where the appraisal forms of sales representatives would be sent to their key customers. Competency rating forms can be sent not only to external customers but also to customers who are internal to the organization.

Advantages and Challenges of Perception-Based Assessment of Competencies:

Advantages and challenges of perception-based assessment of competencies are as follows:

1. Advantages:

Perception-based assessments can be carried out without external help as little specialized expertise is needed. They can also be rolled out without heavy preparation. The process can be run and completed in a short period of time. Thus, it is cheap, convenient and fast.

2. Challenges:

While perception-based assessment has advantages that make it popular, there are certain challenges that need to be addressed to make this form of assessment useful.

They are as follows:

i. Comprehension:

The rating forms should be easy to follow. There are two broad elements in the form – (i) competency definitions and (ii) rating scales. Both should be easy to follow by ensuring that the language used is simple and all technical terms are clearly explained.

ii. Apprehension:

There is always a noticeable degree of apprehension about the possible fallout of the assessment. For example, when someone fills in the self-rating portion of the appraisal form, there could be an apprehension that a low assessment score may have adverse monetary impacts.

This may result in the subject giving a higher self-rating than what he views as reasonable and correct. Similarly, while rating a subordinate in an 'open appraisal' where the subordinate can view the rating by the superior, it can make the assessor apprehensive that a low rating may dishearten the subordinate. Thus, the superior may end up giving ratings higher than what is reasonable.

While rating a peer in a 360 degree process, there may be this fear that giving a high rating may result in the peer getting an advantage in career progression which could be at the expense of the rater. This may precipitate a rating which is lower than what is reasonable.

iii. Biases:

There are a number of biases which can affect perception-based assessment.

Some of the more frequently occurring ones are given as follows:

a. Recency:

While assessing competencies, the rater is sometimes influenced more by critical incidents that happened recently. So if the recent happenings have been positive, the rating tends to be higher. Conversely, major setbacks in the recent past tend to lower the ratings.

b. Primacy:

While assessing competencies, the rater is influenced more by critical incidents that happened at the very beginning of their association. So if the rater had seen ample demonstration of capability at the very start of their association, the rating tends to be high.

Conversely, if the impression at the start was bad, it tends to lower the rating. It is as though ‘first impression is the only impression.’ In a way, this type of bias is the opposite of recency bias.

c. Halo Effect:

The subject has real strength in a competency. While assessing this, the rater is deeply impressed; in fact, he gets so carried away that he ends up giving a high rating to all other competencies resulting in a high overall rating for this subject who in the eyes of the rater has assumed ‘godlike’ dimensions.

d. Horns Effect:

In a way, this could be viewed as the opposite of the 'halo effect'. The subject is really weak in a competency. While assessing this competency, the rater is influenced more by critical incidents that deeply disappointed him; in fact, he gets so agitated that he ends up giving a low rating for all the other competencies, resulting in a low overall rating. In other words, the evaluator has recognized the devil in the subject.

e. Familiarity:

While assessing competencies, the rater is influenced by considerations of familiarity which may be both conscious and subconscious. For example, the rater and subject may be from the same alma mater. The basis for familiarity could be manifold and would include, but not be restricted to, demographic characteristics such as origin, language, caste, creed, religion and education, or psychographic or lifestyle characteristics such as social class, hobbies, interests and passions. Familiarity bias would tend to inflate ratings.

f. Contrast:

Ideally, all ratings should be absolute ratings based on objective considerations. Sometimes raters fall into the trap of comparing the subject with a peer group. If the peer group comprises many good performers, the subject tends to look weak in comparison and the rating drops.

However, if the peer group has a few weaklings, the same subject may look strong and the rating may go up. Readers having an academically brilliant sibling may have experienced the adverse effects of contrast bias, where securing a first class looked like failure to such evaluators.

B. Assessment-Centre-Based Evaluation of Competencies:

What is an assessment centre?

An assessment centre is a process where candidates are assessed to determine their suitability for taking up a given position or role. Needless to say, the candidates could be external applicants who have been recruited or internal applicants looking for a promotion or a lateral movement.

The process involves a multi-method approach to selection whereby candidates will complete a number of different activities and tests that have been specifically designed to assess the key competencies for the position that they have applied for. By evaluating their performance and responses, the assessment centre provides a clear understanding of the candidates' strengths and developmental needs and evaluates their suitability for the position.

When assessment centres are run not for selection but for development, the process is termed a development centre (DC). So while assessment centres unearth competency gaps for individuals with the objective of taking a selection decision (in or out), DCs look at them more for devising befitting individual development plans (IDPs) which would, over a period of time, eradicate the gaps completely.

Setting up an Assessment Centre:

The International Task Force on Assessment Centre Guidelines presents 10 features that must be present for a process to earn the status of an ‘assessment centre’. These constitute the essential elements of an assessment centre.

1. Job analysis – Job analysis has to be carried out to identify all critical job dimensions.

2. Behavioural classification – What are the specific skills, knowledge and abilities required to perform the above job dimensions? What are the corresponding associated behaviours, and what extent or level of competency would be required?

3. Assessment techniques – Which techniques would best suit the subjects for measuring the above competencies? For example, a decision may be taken to include an aptitude test.

4. Multiple assessments – More than one assessment is to be used for the same dimension. Designing or selecting specific exercises, with calibration for evaluation, would have to be carried out. Here one would have to specify the tools for each technique in terms of content, duration and qualifying criterion. For example, if an aptitude test is to be included, one would have to decide which problems to keep in the test, its duration and the cut-off marks.

5. Simulations – Exercises should simulate the demands of the workplace as closely as possible.

6. Multiple assessors to be used – How many assessors should there be? What would be the criteria for being an assessor? Would there be internal or external assessors, and in what proportion? Who would they be? Who would assess what?

7. Assessor training – This should be designed properly, keeping in mind the requirements of the assessment centre.

8. Recording behaviour – There should be facilities for audiovisual recording.

9. Reports – There should be a predetermined report format; for example, a standard format for evaluating management trainees and deciding their roles. Facilities for preparing and printing out reports at the venue must be provided.

10. Data integration – A method is needed for integrating the findings across multiple assessors and techniques; this could be done through consensus after a discussion between assessors, or through a simple statistical combination of scores, as sketched below.
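Where statistical integration is chosen over pure consensus, the simplest scheme is to average each competency’s ratings across assessors and exercises and then roll those averages up into a weighted overall score. The following Python sketch is purely illustrative and not part of any published guideline; the competency names, weights and ratings in it are hypothetical.

```python
from collections import defaultdict

def integrate_ratings(ratings, weights=None):
    """Average ratings per competency across assessors and exercises,
    then combine the averages into a weighted overall score."""
    sums, counts = defaultdict(float), defaultdict(int)
    for r in ratings:
        sums[r["competency"]] += r["score"]
        counts[r["competency"]] += 1

    # Mean rating for each competency across all assessors and exercises
    per_competency = {c: sums[c] / counts[c] for c in sums}

    # Default to equal weights if none are supplied
    if weights is None:
        weights = {c: 1.0 for c in per_competency}
    total_weight = sum(weights.get(c, 0.0) for c in per_competency)
    overall = sum(per_competency[c] * weights.get(c, 0.0)
                  for c in per_competency) / total_weight
    return per_competency, overall

# Hypothetical ratings on a 1-5 scale from two assessors across two exercises
ratings = [
    {"assessor": "A1", "exercise": "in_basket", "competency": "planning", "score": 4},
    {"assessor": "A2", "exercise": "in_basket", "competency": "planning", "score": 3},
    {"assessor": "A1", "exercise": "case_study", "competency": "analysis", "score": 5},
    {"assessor": "A2", "exercise": "case_study", "competency": "analysis", "score": 4},
]
per_comp, overall = integrate_ratings(ratings, weights={"planning": 0.6, "analysis": 0.4})
print(per_comp)           # {'planning': 3.5, 'analysis': 4.5}
print(round(overall, 2))  # 3.9
```

In this hypothetical run, ‘planning’ averages 3.5 and ‘analysis’ averages 4.5, giving a weighted overall score of 3.9; a real scheme would follow the organization’s own rating scale and calibration rules.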

Popular Tools and Techniques used in Assessment Centres:

Various tools and techniques are used in assessment centres.

Some of the popular ones are listed as follows:

1. Psychometric tests – These could be self-report tests using a multiple-choice format; they could also be tests requiring more elaborate responses, such as describing a series of pictures or completing sentences, involving projective techniques.

2. Aptitude tests – These are typically written tests using a multiple-choice format, primarily testing logical and lateral thinking ability.

3. Written test on job knowledge – These are written tests often using multiple choice and/or essay type questions testing the incumbent on his functional, technical and domain knowledge.

4. In basket – It simulates typical piles of paper that might confront a job holder on a particular day. It measures organizing skill and other managerial skills of the job holder.

5. Report writing – The subject is given some data or is made to see a video and is asked to make a report on the situation.

6. Case study and presentation – A job-related problem is presented to the incumbent who is subsequently asked to analyse, come up with a solution or solutions and present the same.

7. Role play – A job-related problem situation is presented and the incumbent is expected to respond and handle the problem through a role play.

8. Group discussion – Incumbents are asked to discuss a topic often in an unstructured manner without any designated leader in the group.

9. Group task – Participants are given a common task and they are asked to solve the task as a group.

10. Personal interview – Usually kept at the end of the assessment centre, the personal interview can be used to check or validate the findings from other modes of assessment. It can also be an independent tool of assessment covering a wide horizon, from psychometric probing (such as BEI) to technical questioning.

Specialized Assessment Considerations:

Additionally, for some processes, let us say for the armed forces, there would be a separate physical abilities test that would measure the strength and endurance of the incumbents through a battery of activities and tests each with clear-cut qualifying marks.

There could also be medical tests to determine the state of vision—checking for colour blindness and power of glasses.

How to make assessment centres more effective?

To make an assessment centre more successful, certain factors would have to be given due consideration.

They are as follows:

1. Communicating the Purpose:

The fear factor often affects responses, either because there is a lack of transparency with regard to the purpose or because the purpose is to eliminate and select. Communicating the purpose clearly at the outset reduces this fear.

2. Appropriate Design:

Sometimes design weaknesses impede the effectiveness of the process. For example, trying to measure too many dimensions could lead to assessment errors; as per the findings of Lievens and Klimoski (2001), the ideal number of dimensions that assessors can observe and rate without compromising accuracy is taken to be three.

Similarly, there could also be an issue with regard to not having an adequate number of exercises. The higher the number, the greater the accuracy of measurement of the concerned dimension. However, the law of diminishing returns puts a cap on this number, as problems of manageability become more predominant as the number of exercises goes up. There is research evidence to indicate that around five exercises would be ideal.

One also needs to ensure that there is adequate fidelity in the exercises that are chosen. In other words, the exercises should fit the daily job requirements and demands. To cite an example, the in-basket exercise fits naturally into the job demands of an office manager, but it does not fit a shift in-charge who works on the shop floor, mostly amidst workmen and machinery.

3. Differentiating between Administrative and Developmental Assessment Centres:

Administrative assessment centres select or promote qualified incumbents, whereas developmental assessment centres facilitate the development of the participants. Hawthorne (2011) has urged management to resist the temptation of ‘double dipping’, that is, using the results from a developmental assessment centre to make decisions with regard to promotion and retention.

In fact, there are five clear differences between these two very contrasting types of assessment centres which must be borne in mind while designing the interventions.

The differences are listed as follows:

a. For DCs, one has to choose dimensions that can actually be developed.

b. The assessors in DCs are not only raters; they also provide developmental feedback and double up as coaches. Needless to say, the training they receive has to be very different from what the assessors in conventional assessment centres receive.

c. The exercises in DCs need to exactly capture the job requirements and simulate workplace realities in the best possible manner.

d. In a DC the feedback has to be very detailed. It also has to be given in a manner such that the incumbent does not lose heart but develops the enthusiasm to improve.

e. An IDP with clear development targets, schedules and means of development would be a product of the DC.

4. Effectiveness of the Assessors:

The assessors would have to be carefully chosen. The choice of internal assessors should be made in such a manner that the scope for bias is minimized. The chosen external assessors should have some degree of familiarity with the industry domain of the assessees. Besides, the training of assessors should be done with sufficient care so that they are familiar with all the assessment tools.

5. Sharing the Report:

The report format should facilitate easy reading and comprehension. Instead of merely handing over the report, the assessees should be given some time to go through it and then seek explanations or elaborations on their points of concern. Assessors must have the desired technical knowledge and emotional intelligence to handle these questions, even in an administrative assessment centre.

Popularity:

Assessment centres were initially adopted by the armed forces of nations such as Germany and Britain during World War II. Their first large-scale corporate deployment came at AT&T in 1956, when 422 young managers were assessed for their potential to advance to higher levels of management; since then, assessment centres have become progressively more accepted in the corporate world.

AT&T was followed by Standard Oil of Ohio in adopting the process. By the 1980s, in excess of 2,500 organizations based in the USA were using assessment centres, including the likes of IBM, Sears Roebuck, General Electric and Caterpillar.

Today, their popularity is seen in developing economies like India with groups such as – TATA, Aditya Birla, Ranbaxy and R. P. Goenka endorsing assessment centres as an important enabler for using a competency-based approach to managing human resources.

Advantages and Disadvantages of Assessment-Centre-Based Evaluation:

Advantages and disadvantages of assessment-centre-based evaluation are as follows:

Advantages:

Properly designed and administered assessment centres are more reliable than traditional testing methods for evaluating supervisory, administrative and managerial potential.

The reasons could be manifold but some of them are listed as follows:

a. Proper job analysis is done to discern the exact job requirements and competencies;

b. Multiple methods are used to assess the same competency;

c. Role playing situations or case studies are used that simulate near real job conditions;

d. The process is long and rigorous, ensuring no slip-ups;

e. Multiple assessors weed out bias;

f. Assessors are knowledgeable people selected both from within and outside;

g. The process is flexible and can scale up or down to assess incumbents for any position;

h. Assessment centres usually register high criterion validity; there is a positive correlation between success at the workplace and assessment scores. Reported predictive validity falls between 0.37 and 0.41, higher than that of most other selection devices (a computation sketch follows this list).
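Predictive validity here is simply the correlation between assessment-centre scores and a later measure of job performance, such as appraisal ratings. The sketch below shows how an organization might estimate this coefficient from its own records; the six pairs of figures used are hypothetical.

```python
from statistics import mean

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical data: overall assessment-centre scores for six employees,
# and their appraisal ratings one year later (both on a 1-5 scale).
ac_scores = [3.2, 4.1, 2.8, 3.9, 4.5, 3.0]
appraisals = [3.0, 4.3, 2.5, 3.6, 4.8, 3.2]

print(round(pearson_r(ac_scores, appraisals), 2))  # estimated predictive validity
```

The closer the coefficient is to 1, the better the centre’s scores predict subsequent performance; values in the 0.37–0.41 range quoted above represent a moderate but useful level of prediction.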

Disadvantages:

a. Time consuming and costly;

b. Low construct validity—does not always measure what it is supposed to measure;

c. Role plays are simulations and not real situations;

d. Psychometric tests can be ‘faked’;

e. The responses in assessment centres may be put on and not genuine.


Competency – Top 5 Applications of a Competency Model

The competency model can have multiple uses in an organization.

Some of them are as follows:

1. Learning and Development:

It helps identify the competencies employees need to develop in order to improve performance in their current job or to prepare for other jobs via promotion or transfer. Subsequently, an individual development plan (IDP) could be implemented with a view to addressing the developmental requirements. Designing academic and professional certification programmes to address competency gaps would also be possible.

2. Recruitment and Selection:

Competency models help in job profiling. Therefore, attracting and selecting the right candidates becomes easier.

3. Performance Management:

It can also be a means for businesses to communicate performance expectations to their employees. Subsequently, it enables assessment of performance of individuals in their job roles.

4. Career Planning:

A competency model gives a clear idea of what level of competency is required at each level of the organizational hierarchy and in each functional area, so career planning becomes more objective.

5. Succession Planning:

Competency requirements at various levels of the organizational hierarchy, together with assessment results showing who stands where, can help in determining potential successors for all important positions.

From a socio-economic perspective, competency models also serve as a bridge between educators, businesses and other stakeholders who have invested in grooming and preparing students for today’s workplace challenges.