RESEARCH METHODOLOGY

3.1 Introduction

In this chapter, we discuss how the data were retrieved and analyzed. The process for each model and the methods used to achieve the objectives are then explained in detail. First, we compute the claim ratio, RBC level, intellectual capital scores, and profitability ratios (ROA and ROE percentages) using each model's formula, based on the financial indicators stated in the financial statements of the takaful operators' annual reports from 2015 until 2019. Next, we compare the financial performance of the selected takaful operators based on claim ratio, RBC, and VAIC using panel data regression, along with the tests required to select the appropriate model. Then, the relationship between financial performance and intellectual capital is tested. Lastly, from these outcomes, we determine which approach gives the better prediction and analysis of the financial performance of takaful operators for each takaful fund and can inform improvements to their future financial performance.

3.2 Types and Sources of Data

This study focuses primarily on the financial performance of insurance operators in Malaysia using rating systems such as RBC, financial ratios such as the claim ratio, ROA, and ROE, and intellectual capital. Data on the takaful operators in Malaysia are obtained from secondary information in the annual reports provided on each company's website. These reports contain financial statements, the company's policies, the risk management framework, and the corporate governance statement. From these reports, we use the financial statement data for a period of five years, from 2015 until 2019, to compute the RBC, claim ratio, and intellectual capital measures accordingly. A total of 15 takaful operators listed under licensed financial institutions by BNM were selected for the purpose of this study.
The selection covers companies with sufficient financial data throughout the study period. The following table lists the selected takaful operators:

Table 3.1: List of Takaful Operators in Malaysia

No  Company's Name
1   AIA Takaful Berhad
2   AmMetLife Takaful Berhad
3   Etiqa Family Takaful Berhad
4   Etiqa General Takaful Berhad
5   FWD Takaful Berhad
6   Great Eastern Takaful Berhad
7   Hong Leong MSIG Takaful Berhad
8   Prudential BSN Takaful Berhad
9   Sun Life Malaysia Takaful Berhad
10  Syarikat Takaful Malaysia Am Berhad
11  Syarikat Takaful Malaysia Keluarga Berhad
12  Takaful Ikhlas Family Berhad
13  Takaful Ikhlas General Berhad
14  Zurich General Takaful Malaysia Berhad
15  Zurich Takaful Malaysia Berhad

3.3 Descriptive Statistics

The most frequently used measure of central tendency is the mean. It is calculated by dividing the sum of the values by the number of observations in the data set. The formula for the mean is:

x̄ = Σx / n.    (3.1)

The standard deviation measures how much a collection of data deviates from its average. It quantifies a distribution's variation: the larger the dispersion or volatility of the values around their average, the larger the standard deviation. The formula for the standard deviation is:

σ = √( Σ(x − x̄)² / (n − 1) ),    (3.2)

where
σ: standard deviation,
x: each value in the data set,
x̄: mean of all values in the data set,
n: number of values in the data set.

The minimum is the data point that is less than or equal to all other values in the data set; if the data were arranged from smallest to largest, it would be the first value in the ranking. The maximum is the data point that is greater than or equal to all other values in the data set; it would be the final value when the data are sorted in increasing order.
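As a minimal sketch of these descriptive statistics, the sample values below are illustrative placeholders, not figures taken from any operator's report:

```python
import numpy as np

# Hypothetical five-year sample (e.g., a claim-ratio series); values are
# placeholders for illustration only.
x = np.array([0.42, 0.47, 0.51, 0.45, 0.50])

mean = x.sum() / len(x)                                 # Equation (3.1)
std = np.sqrt(((x - mean) ** 2).sum() / (len(x) - 1))   # Equation (3.2)
minimum, maximum = x.min(), x.max()                     # smallest and largest values

print(mean, round(std, 4), minimum, maximum)
```

Note that `np.std(x, ddof=1)` returns the same sample standard deviation as the explicit Equation (3.2) computation above.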
3.4 Claim Ratio

The claim ratio measures the actual payments that insurers or takaful operators make out of the total premiums or contributions they receive from policyholders. This ratio, discussed by Rusmita (2018), is one of the diagnostic measures used within the Early Warning System framework to analyze an insurer's financial condition. A low claim ratio reflects a reduced risk profile and a more productive business, whereas a high ratio signals financial strain. The claim ratio represents the proportion of claims over contributions earned, and it directly affects the insurer's net profit (Foong & Idris, 2012). The claim ratio is calculated as:

Claim ratio = Net claims incurred / Net contributions.    (3.3)

The values of net claims incurred and net contributions are provided in each company's financial statements. Since net claims incurred represents the amount of claims the company has paid throughout the financial year, the values are stated in the financial statements in bracketed (negative) form. Net contributions refer to the amount of contributions received in the form of premiums paid by policyholders.

3.5 Risk-Based Capital (RBC)

Within the RBC approach, a takaful operator's financial stability is determined by the company's capacity to fulfill its minimum reserve obligations relative to its exposures. To calculate the RBC ratio, every insurance operator is assessed using the Capital Adequacy Ratio (CAR) formula (Mohamad et al., 2017). Through the CAR, the operator's financial strength is evaluated against the supervisory capital requirement implemented by BNM, which is 130% (Bank Negara Malaysia, 2013a).
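Before detailing the CAR components, the claim ratio computation in Equation (3.3) can be sketched as follows; the statement figures are invented for illustration:

```python
def claim_ratio(net_claims_incurred, net_contributions):
    """Equation (3.3): net claims incurred divided by net contributions."""
    return net_claims_incurred / net_contributions

# Net claims incurred appears bracketed (negative) in the statements, so
# take its absolute value before dividing. Figures in RM million, hypothetical.
ratio = claim_ratio(abs(-150.0), 250.0)
print(ratio)  # 0.6
```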
The CAR formula evaluates the sufficiency of available capital, held in the takaful funds and the shareholders' fund, to cover the capital required:

CAR = (TCA / TCR) × 100%,    (3.4)

where
TCA: Total Capital Available,
TCR: Total Capital Required.

3.5.1 CAR for Takaful Operators

BNM imposes several requirements so that a takaful operator is regulated according to Shariah principles. For the Participants' Risk Fund (PRF) to fulfill its requirements, every takaful operator must provide an interest-free loan (qard) from the Shareholders' Fund (SHF) if there is any shortfall in the PRF. The TCA comprises the aggregate of Tier 1 and Tier 2 capital, excluding several deductions from the funds. The components of TCA are stated in Table 3.2 below:

Table 3.2: Total Capital Available (TCA) Components

Tier 1:
- Issued and fully paid-up shares
- Share premiums
- Paid-up non-cumulative irredeemable preference shares
- Capital reserves
- Retained profits
- Surplus from the valuation of takaful funds
- Available-for-sale reserves
- Revaluation reserves for self-occupied and other assets
- General reserves

Tier 2:
- Aggregate irredeemable preference shares
- Obligatory capital loan stocks and other associated capital instruments
- Irredeemable subordinated debts
- Subordinated term debts
- Qard from shareholders' funds

The TCA for a licensed takaful operator is evaluated as:

TCA = CA_SF + Σ_{all i} Min[CA_i, 130% of Max(SVCC_i, CR_i)],    (3.5)

where
i: takaful fund,
CA_SF: capital available in the shareholders' fund,
CA_i: capital available in takaful fund i,
SVCC_i: surrender value capital charges in takaful fund i,
CR_i: capital required for takaful fund i.

The TCA value for each takaful fund is provided in the financial statements of every takaful operator; therefore, no separate calculation of TCA is required.
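A minimal sketch of Equations (3.4) and (3.5), assuming hypothetical fund-level figures (CA_i, SVCC_i, CR_i) and an already-aggregated TCR:

```python
# Each tuple is (CA_i, SVCC_i, CR_i) for one takaful fund; all figures
# are hypothetical (RM million).
funds = [
    (120.0, 60.0, 80.0),   # e.g., a family takaful fund
    (90.0, 40.0, 50.0),    # e.g., a general takaful fund
]
ca_sf = 200.0              # capital available in the shareholders' fund

# Equation (3.5): each fund contributes at most 130% of Max(SVCC_i, CR_i).
tca = ca_sf + sum(min(ca, 1.30 * max(svcc, cr)) for ca, svcc, cr in funds)

tcr = 250.0                # assume TCR already computed via Equation (3.6)
car = tca / tcr * 100      # Equation (3.4), as a percentage
print(tca, car)            # a CAR above 130% meets BNM's supervisory target
```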
The TCR for takaful is calculated based on capital charges on both the takaful funds and the shareholders' fund:

TCR = Σ_{all i} Max[SVCC_i, CR_i] + Max[SVCC_SF, CR_SF],    (3.6)

where
SVCC_i: surrender value capital charges in takaful fund i,
CR_i: the sum of capital charges for credit risk, market risk, and takaful liabilities for each fund i,
SVCC_SF: surrender value capital charges in the shareholders' fund,
CR_SF: the four types of risk capital charges on the shareholders' fund, namely credit, market, expense liabilities, and operational.

To compute the TCR, each risk capital charge is computed individually using the formulas for the components of capital required to mitigate major risks provided in BNM's takaful framework. The formulas for the capital charges are listed below.

3.5.1.1 Credit Risk Capital Charges (CRCC)

The purpose of the CRCC is to offset possible losses arising from asset default, associated financial difficulties, and the reluctance or refusal of a counterparty to fully comply with its agreed financial commitments to a licensed takaful operator. The CRCC formula is:

CRCC = Σ_{all i} (credit exposure_i × credit risk charge_i),    (3.7)

where
i: takaful fund,
credit exposure_i: credit exposure for takaful fund i,
credit risk charge_i: credit risk charge for takaful fund i.

3.5.1.2 Market Risk Capital Charges (MRCC)

The MRCC is intended to mitigate the risk of financial losses arising from declines in the market value of assets due to equity ownership, profit rate, commodity, and currency risks; from non-parallel movements between the valuation of liabilities and the valuation of the assets supporting those obligations as a result of profit rate changes; and from the concentration of exposures to particular counterparties or groups of assets, as defined in paragraph 9 of Appendix II of BNM's takaful framework (Bank Negara Malaysia, 2013b) (see Appendix K).
The MRCC formula is:

MRCC = Σ_{all i} (market exposure_i × market risk charge_i),    (3.8)

where
i: takaful fund,
market exposure_i: market exposure for takaful fund i,
market risk charge_i: market risk charge for takaful fund i.

3.5.1.3 General Takaful Liabilities Capital Charges (GCC)

Over and above the degree of security already accounted for at the 75 per cent confidence level, the GCC seeks to mitigate the risk of under-estimation of general takaful liabilities and of an unfavorable claims experience for a licensed takaful operator engaging in general takaful business. The formula for the GCC is:

GCC = Σ_{all i} [(CL_i × risk charge_i) + (URR_i × risk charge_i)],    (3.9)

where
i: takaful fund,
CL_i: claim liabilities for takaful fund i,
risk charge_i: risk charge for takaful fund i,
URR_i: unexpired risk reserves for takaful fund i.

Claim liabilities and the provision for unexpired risk (URR) are values determined in accordance with paragraph 10.2 in Appendix IV of BNM's takaful framework (Bank Negara Malaysia, 2013b) (see Appendix K).

3.5.1.4 Family Takaful Liabilities Capital Charges (FCC)

The FCC is intended to mitigate the risk of under-estimation of family takaful obligations and of an unfavorable claims experience, in addition to the amount of provision already accounted for at the 75 per cent confidence level, for a licensed takaful operator carrying on family takaful business. The Appointed Actuary must evaluate and certify whether each FCC and ECC is decrement-supported, such as lapse-supported or mortality-supported, and apply the appropriate direction of the stress factors when generating each FCC and ECC. To avoid any negative FCC or ECC, the stress direction chosen must be the one that generates the highest liability value for each risk component. The rationale for the stress factors chosen for each product and each fund must be stated in the corresponding actuarial analysis.
The FCC formula is:

FCC = V* − Value of family takaful liabilities,    (3.10)

where
V*: adjusted best estimate value of family takaful liabilities computed using the stress factors stipulated in Appendix V of BNM's takaful framework (see Appendix M),
Value of family takaful liabilities: value determined based on paragraph 10.3 of BNM's takaful framework at the 75% confidence level (see Appendix N).

3.5.1.5 Shareholders' Fund Expense Liabilities Capital Charges (ECC)

Beyond the provisions already accounted for at the 75 per cent confidence level, the ECC aims to mitigate the risk of under-estimation of expense liabilities and of an unfavorable expense experience in the shareholders' fund. The formula for the ECC is:

ECC (Family) = Σ_{all i} (V*_e,i − Value of expense liabilities_i),    (3.11)

where
i: takaful fund,
V*_e,i: adjusted best estimate value of expense liabilities computed using Appendix V of BNM's takaful framework (see Appendix M),
Value of expense liabilities_i: value determined based on paragraph 10.3 of BNM's takaful framework at the 75% confidence level (see Appendix N).

3.5.1.6 Operational Risk Capital Charges (ORCC)

The ORCC aims to reduce the risk of a licensed takaful operator losing money due to inadequate or failed internal processes, people, and systems in monitoring the takaful business. The ORCC also covers losses resulting from Shariah non-compliance and from a licensed takaful operator's failure to fulfill its fiduciary obligations. The ORCC is:

ORCC = 1% of the total assets of the takaful business,    (3.12)

where the total assets of the takaful business refer to the assets in the takaful funds and the shareholders' fund.
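The exposure-times-charge pattern shared by Equations (3.7), (3.8), and (3.12) can be sketched as below; the exposures and charge rates are illustrative stand-ins, since the actual rates are prescribed in BNM's framework:

```python
# (exposure_i, risk_charge_i) pairs per takaful fund; all values invented.
credit = [(500.0, 0.016), (300.0, 0.08)]
market = [(400.0, 0.16), (250.0, 0.08)]

crcc = sum(exposure * charge for exposure, charge in credit)   # Equation (3.7)
mrcc = sum(exposure * charge for exposure, charge in market)   # Equation (3.8)

total_assets = 2000.0        # takaful funds plus shareholders' fund assets
orcc = 0.01 * total_assets   # Equation (3.12): 1% of total assets
print(crcc, mrcc, orcc)
```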
3.5.1.7 Surrender Value Capital Charges (SVCC)

The SVCC is computed as:

SVCC_i = Max[0, (Aggregate surrender value of business in force_i − liabilities_i)],    (3.13)

where
i: takaful fund,
Aggregate surrender value of business in force_i: refers to the shareholders' fund and each family takaful fund established by a licensed takaful operator as per the fund segregation requirement stipulated in the Guidelines on Takaful Operational Framework,
liabilities_i: the value of family takaful fund liabilities and the value of expense liabilities determined in accordance with paragraph 10.3 of BNM's takaful framework (see Appendix N).

3.6 Value Added Intellectual Capital (VAIC)

The intellectual capital of an insurance firm is calculated using VAIC, which comprises three efficiency components: human capital, structural capital, and capital employed (Lu et al., 2014). Following Pulic (2000), a series of steps is applied to compute a VAIC score. The formula for VAIC and its components is:

VAIC = VACA + VAHU + SCVA,    (3.14)
VACA = VA / CA,    (3.15)
VAHU = VA / HU,    (3.16)
SCVA = SC / VA,    (3.17)

where
VACA: Value Added Capital Employed,
VA: value added,
CA: capital employed,
VAHU: Value Added Human Capital,
HU: human capital,
SCVA: Value Added Structural Capital,
SC: structural capital.

VACA is the value added per unit of capital employed, where CA, the capital employed, is equal to the gross book value of assets minus intangible assets. VA is the value generated by a company, drawn from the two sources of human capital (HU) and structural capital (SC). Since wages are not treated as an expense here, because such costs play a significant role in profit creation and are regarded as capital, value added is quantified using the following expression:

VA = OP + EC + D + A,    (3.18)

where
VA: value added,
OP: operating profit,
EC: employee cost,
D: depreciation,
A: amortisation.
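A compact sketch of the VAIC computation in Equations (3.14)-(3.18), using invented statement figures and Pulic's summation of the three efficiencies (structural capital taken as VA minus HU):

```python
# Invented figures (RM million): operating profit, employee cost,
# depreciation, amortisation.
op, ec, d, a = 50.0, 30.0, 5.0, 3.0

va = op + ec + d + a    # Equation (3.18): value added
hu = ec                 # human capital, proxied by total employee cost
ca = 400.0              # capital employed (assets less intangibles, assumed)
sc = va - hu            # structural capital

vaca = va / ca          # Equation (3.15)
vahu = va / hu          # Equation (3.16)
scva = sc / va          # Equation (3.17)
vaic = vaca + vahu + scva   # Equation (3.14): sum of the three efficiencies
print(round(vaic, 4))
```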
We then calculate VAHU, which summarizes the contribution of every dollar paid in labor wages, by dividing value added by the cost of human resources. Another component of the VAIC calculation is Value Added Structural Capital (SCVA), the efficiency of structural capital, where structural capital is:

SC = VA − HU,    (3.19)

where
SC: structural capital,
HU: human capital.

This method assumes that each increase in the value of intangible assets, such as intellectual capital, has a favorable impact on the financial performance of insurance companies (Iswati & Anshori, 2007). Thus, intellectual capital will be tested against ROA and ROE as the profitability indicators of the takaful operators. When insurance firms derive a significant part of their profits from their investments, those gains reflect resources already spent, which affects measured efficiency. We therefore use the overall profit from takaful operations so that this does not distort the results.

3.7 Profitability Ratio Measurement

Profitability ratios are a series of calculations used to assess a company's capacity to generate profits. These ratios are viewed favorably when they increase significantly and outperform rivals' performance. They are calculated by comparing earnings with various expenditure groups in the financial statements. The primary objective of such ratios is to assess how efficiently an organization can generate returns in proportion to the cost of invested equity and available assets. If the results of these assessments are not positive, it indicates that the use of investments may need to be reduced. When computing profitability, it is ideal to contrast a firm's current performance with the corresponding period of the previous year.
This is because some firms experience temporary (seasonal) sales, causing their profitability ratios to fluctuate significantly throughout the year.

3.7.1 Return on Assets (ROA)

ROA measures how profitable a company is relative to its total assets. ROA was established in 1919 through the DuPont system, which highlights a firm's capacity by offering insight to a manager, customer, or consultant into how effectively the firm utilizes its resources to generate profits (Burca & Batrinca, 2014). According to Mehari (2013), this profitability ratio is among the most relevant metrics for evaluating insurers' profitability and overall success, since it reflects the revenues earned from the investments they hold. The higher the ROA, the better: the organization earns more revenue with less spending. Ishtiaq (2019) defined the formula for ROA as:

ROA = Net Income / Total Assets.    (3.20)

3.7.2 Return on Equity (ROE)

In essence, the ROE ratio calculates the return that the holders of a company's capital stock earn on their shares. Return on equity reflects the business's ability to produce gains for its owners from the money it received. Yazid (2012) and Abu Hussin (2014) state that this ratio is an indicator of a firm's profitability derived from the utilization of equity. Generally, investors favor companies with higher ROEs; nevertheless, ROE is best used as a baseline to compare stocks within the same market, since profit and revenue ratios differ greatly across industries. Even within the same industry, ROE levels can differ if a business chooses to pay dividends rather than retain the profit produced as idle cash. A study by Mansor (2016) also employed ROE to examine the relationship between risk and return. Net income denotes an organization's earnings after taxes and expenditures have been deducted. Shareholders' equity is the owners' residual claim on the company's assets once obligations are cleared.
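As a small numerical illustration of the two profitability ratios, with assumed statement figures (the ROE formula itself is stated just below):

```python
# Assumed figures (RM million); not drawn from any annual report.
net_income = 45.0
total_assets = 900.0
shareholders_equity = 300.0

roa = net_income / total_assets * 100          # Equation (3.20), in percent
roe = net_income / shareholders_equity * 100   # ROE = net income / equity
print(roa, roe)
```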
The ROE formula is stated as:

ROE = Net Income / Shareholders' Equity.    (3.21)

3.8 Analysis of Data

Data analysis proceeds by testing for multicollinearity, selecting an appropriate panel data regression model, and running several classical assumption tests. Panel data regression analysis, the coefficient of determination R², the F test, the Panel-Corrected Standard Error model, the Generalized Least Squares model, and hypothesis testing are the additional treatments and tests conducted. Several steps are carried out in order to decide on the best model estimation.

3.8.1 Multicollinearity Test

Multicollinearity occurs when two or more independent variables in a regression model are strongly correlated with one another (Daoud, 2018). The Pearson correlation matrix and the variance inflation factor (VIF) are used to analyze whether the study has a multicollinearity problem. Whenever a correlation coefficient in the correlation matrix is valued at 0.8 or above, there is a collinearity problem (Gujarati & Porter, 2013).

3.8.2 Panel Data Regression Model Estimation

Panel data regression combines cross-sectional and time-series data, in which each cross-sectional unit is observed multiple times. This kind of regression offers more information, greater degrees of freedom, and less multicollinearity among regressors, resulting in more accurate econometric estimates (Hsiao, 2002). Panel data is classified into two types: balanced and unbalanced. The data is considered balanced if every unit is observed over the same number of periods; when the number of years differs across units, the panel is unbalanced (Zulfikar, 2018). A larger sample is a major advantage of unbalanced panel data. However, because each year's estimates depend on different amounts of data, the reliability of the estimates fluctuates over time.
Balanced panel data is the opposite: the accuracy of the estimates rarely changes over time, but the disadvantage is that the sample size is smaller (Kerstens, 2014). For the regression analysis, a total of 17 takaful operators, comprising 11 family takaful operators and six general takaful operators, were initially selected for the five-year study period. However, three general takaful operators, namely Hong Leong MSIG Takaful Berhad, Prudential BSN Takaful Berhad, and Zurich General Takaful Malaysia Berhad, were excluded through listwise deletion because of missing data for 2017, 2018, and 2019, resulting in a final total of 14 takaful operators with sufficient data for the five-year period being included in the panel data regression analysis. The selected family and general takaful operators are listed below:

Table 3.3: Selected Family and General Takaful Operators

No  Takaful Operators
1   AIA Public Takaful Berhad
2   AmMetLife Takaful Berhad
3   Etiqa Family Takaful Berhad
4   FWD Takaful Berhad
5   Great Eastern Takaful Berhad
6   Hong Leong MSIG Takaful Berhad
7   Prudential BSN Takaful Berhad
8   Sun Life Malaysia Takaful Berhad
9   Syarikat Takaful Malaysia Keluarga Berhad
10  Takaful Ikhlas Family Berhad
11  Zurich Takaful Malaysia Berhad
12  Etiqa General Takaful Berhad
13  Syarikat Takaful Malaysia Am Berhad
14  Takaful Ikhlas General Berhad

Several steps of the panel regression method are used to compare the financial performance results of the takaful operators. The panel data regression formula is:

Y_it = α + β′X_it + ε_it,    (3.22)

where
Y_it: the value of the dependent variable for each takaful fund in a particular year,
α: Y-intercept,
β′: the slopes of the independent variables,
X_it: the value of each independent variable for each takaful fund in a particular year,
ε_it: the error term for each takaful fund in a particular year.
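Although the study itself uses STATA, the pooled form of Equation (3.22) can be sketched in Python with ordinary least squares on a toy balanced panel; the regressor values below are randomly generated placeholders, not operator data:

```python
import numpy as np

# Toy balanced panel: 3 funds x 5 years = 15 observations.
rng = np.random.default_rng(0)
n = 15
X = np.column_stack([
    np.ones(n),                  # intercept (alpha)
    rng.uniform(0.3, 0.8, n),    # CR: claim ratio
    rng.uniform(1.0, 5.0, n),    # VAIC score
    rng.uniform(1.3, 3.0, n),    # RBC level (CAR / 100)
])
beta_true = np.array([0.02, -0.05, 0.01, 0.005])
y = X @ beta_true + rng.normal(0.0, 0.001, n)   # simulated ROA, small noise

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)  # pooled OLS (CEM) estimate
print(np.round(beta_hat, 3))
```

With such small noise, the pooled OLS estimate recovers the generating coefficients closely; the fixed- and random-effect variants differ only in how the fund-level intercepts are treated.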
Four panel data regression models are assessed in this study. Each model is estimated for either the general takaful fund or the family takaful fund over the study period from 2015 until 2019. The panel data regression equations to be derived are:

Model 1: ROA model for the family takaful fund
ROA_it = β0 + β1 CR_it + β2 VAIC_it + β3 RBC_it + ε_it,    (3.23)

Model 2: ROE model for the family takaful fund
ROE_it = β0 + β1 CR_it + β2 VAIC_it + β3 RBC_it + ε_it,    (3.24)

Model 3: ROA model for the general takaful fund
ROA_it = β0 + β1 CR_it + β2 VAIC_it + β3 RBC_it + ε_it,    (3.25)

Model 4: ROE model for the general takaful fund
ROE_it = β0 + β1 CR_it + β2 VAIC_it + β3 RBC_it + ε_it,    (3.26)

where
CR: claim ratio,
VAIC: value added intellectual capital,
RBC: risk-based capital,
i: type of takaful fund,
t: year,
β0: constant,
β1, ..., β3: coefficients of the independent variables,
ε: error term.

The panel data regression analysis is performed using the statistical software STATA. Several models exist within the panel data method: the common effect model, the fixed effect model, and the random effect model. The Chow test, Hausman test, and Breusch-Pagan Lagrange Multiplier (LM) test are performed first in order to choose which model to use.

3.8.2.1 Random Effect Model (REM)

This model is also called the Error Components Model (ECM), and it accommodates time-invariant factors. It assumes that the variation between the family and general takaful funds is stochastic and uncorrelated with the regressors, which allows time-invariant parameters to serve as predictors. Since this model permits greater involvement of independent factors and assumes the same magnitudes across all samples in the group, it requires fewer inputs than the FEM.

3.8.2.2 Fixed Effect Model (FEM)

This model is also called the Least Squares Dummy Variable (LSDV) model.
This model excludes time-invariant factors and preserves heterogeneity, as it allows the profitability ratios of the family and general takaful funds to each carry their own intercept while keeping the slope unchanged. As a consequence, this technique can determine the overall influence of the independent variables on the dependent variable.

3.8.2.3 Common Effect Model (CEM)

This model uses Ordinary Least Squares (OLS) to integrate the cross-section and time-series data, which is why it is also known as the pooled OLS model. It is simpler than the other two models, since it assumes that all data can be combined; as such, the slope and intercept are constant across units and periods. In this study, this model would imply that the family and general takaful funds produce similar profitability outcomes under the claim ratio, RBC, and VAIC approaches. Nevertheless, this model fails to capture heterogeneity, since it pools the profitability outcomes of the family and general takaful funds, which is why the other models are also considered in selecting the best model for this study.

3.8.3 Statistical Tests for the Best Model Selection

3.8.3.1 Hausman Test

The Hausman test is used in panel data analysis to select between the fixed effect model and the random effect model. The Hausman test hypotheses are:

H0: Random Effect Model,
Ha: Fixed Effect Model.

If the p-value is greater than 0.05, H0 is not rejected; if the p-value is less than 0.05, H0 is rejected. The formula for the Hausman test is:

Hausman test = (β̂_FE − β̂_RE)² / [Var(β̂_FE) − Var(β̂_RE)],    (3.27)

where
β̂_FE: estimated coefficient of the fixed effects model,
β̂_RE: estimated coefficient of the random effects model.
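For a single coefficient, the scalar Hausman statistic in Equation (3.27) can be sketched directly; the estimates and variances below are made-up numbers:

```python
# Hypothetical estimates of one slope under the two specifications.
b_fe, b_re = 0.85, 0.80          # fixed- vs random-effects estimate
var_fe, var_re = 0.010, 0.006    # their estimated variances

hausman = (b_fe - b_re) ** 2 / (var_fe - var_re)   # Equation (3.27)
# Under H0 the statistic is asymptotically chi-square; here ~0.625 is well
# below the 5% critical value of 3.84 (1 df), so REM would not be rejected.
print(round(hausman, 3))
```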
3.8.3.2 Breusch-Pagan Lagrange Multiplier (LM) Test

The Lagrange Multiplier (LM) test is used to determine whether the random effect model or the common effect model should be used. The chi-square value determines the acceptance of the common effect model. The hypotheses are:

H0: Common Effect Model,
Ha: Random Effect Model.

If the p-value is greater than 0.05, H0 is not rejected; if the p-value is less than 0.05, H0 is rejected. The formula for the LM test is:

F = (R̂_u² / k) / [(1 − R̂_u²) / (n − k − 1)],    (3.28)

where
n: number of observations,
k: number of regressors,
R̂_u²: coefficient of determination of the regression.

3.8.3.3 Chow Test

The Chow test is most often used in panel data analysis to decide between a fixed effect and a common effect specification. The Chow test hypotheses are:

H0: Common Effect Model,
Ha: Fixed Effect Model.

If the p-value is greater than 0.05, H0 is not rejected; if the p-value is less than 0.05, H0 is rejected. The formula for the Chow test is:

Chow test = [(RSS_P − (RSS_1 + RSS_2)) / k] / [(RSS_1 + RSS_2) / (N_1 + N_2 − 2k)],    (3.29)

where
RSS_P: residual sum of squares of the combined regression,
RSS_1: residual sum of squares of the regression before the break,
RSS_2: residual sum of squares of the regression after the break,
k: number of explanatory variables,
N: number of observations.

3.8.4 Classical Assumption Tests

3.8.4.1 Normality Test

Normality tests are typically performed to assess whether a data set follows a normal distribution, that is, to evaluate the probability that the random variable underlying the collected data is normally distributed. This assumption can be checked using two normality tests: the Skewness and Kurtosis test and the Shapiro-Wilk test. According to Kim (2013), the Skewness and Kurtosis test is applicable to a wide range of sample sizes, and it is appropriate when the sample size is greater than 50. Meanwhile, the Shapiro-Wilk test is suitable for sample sizes smaller than 50 (Mishra et al., 2019).
A distribution is considered approximately symmetric if the skewness is close to 0; if the skewness value approaches 1, the distribution is positively skewed and departs from normality. Since the total number of observations for Models 1 and 2 is 55, which is greater than 50, the Skewness and Kurtosis test is preferred. For Models 3 and 4, the total number of observations for the general takaful fund over the five-year period is 15, which falls below 50, so the Shapiro-Wilk test is applied. The hypotheses for both normality tests are:

H0: The data is normally distributed,
Ha: The data is not normally distributed.

If the p-value is greater than 0.05, the data is considered normally distributed. The z-statistics for skewness and excess kurtosis are:

Z = Skewness / SE_skewness,    (3.30)
Z = Excess kurtosis / SE_excess kurtosis.    (3.31)

Meanwhile, the Shapiro-Wilk statistic is:

W = (Σ_{t=1}^{n} a_t y_t)² / Σ_{t=1}^{n} (y_t − ȳ)²,    (3.32)

where
n: number of observations,
a_t: tabulated coefficients,
y_t: the t-th value of the ordered sample,
ȳ: mean of the ordered sample.

3.8.4.2 Heteroscedasticity Test

The problem of heteroscedasticity arises from unequal error variance. It is contrary to the Classical Linear Regression Model (CLRM) assumption that all disturbance terms have the same variance. It is essential to ensure the model is adequate and does not contradict this linear regression assumption. The null hypothesis is homoscedasticity, meaning that the variance σ² of all disturbances ε is constant:

H0: V(ε_i) = σ² for all i,
Ha: V(ε_i) ≠ σ² for all i.

When the p-value falls below the 5% significance level, the null hypothesis is rejected, since the variances of the error terms are not constant. This implies that the model has unequal dispersion and is heteroskedastic, which requires adjustment of the model.
If the p-value is greater than 0.05, the estimated model does not suffer from heteroscedasticity. The Breusch-Pagan/Cook-Weisberg test is used to check the heteroscedasticity assumption if the appropriate model is the random effect model, while the Modified Wald test is applied if the chosen model is the fixed effect model.

3.8.4.3 Autocorrelation Test

The problem of autocorrelation, or serial correlation, arises when the model violates any of the CLRM assumptions. According to Asteriou and Hall (2019), omitted variables, model misspecification, and systematic measurement errors are all factors that contribute to autocorrelation. The estimated variances of the regression coefficients are affected by such a serial correlation problem. The requirement is that the correlations and covariances among the disturbances are all zero, implying that u_t and u_s are independently distributed:

Cov(u_t, u_s) = 0 for all t ≠ s.

Once this criterion is violated, the error terms are not independently distributed, implying that periods t and s may be correlated; this is the problem of autocorrelation, and hypothesis testing then becomes unreliable. The Wooldridge test is used in this research to detect autocorrelation problems in the model. The hypotheses are:

H0: No first-order autocorrelation,
Ha: First-order autocorrelation is present.

The null hypothesis is no autocorrelation. If the p-value is greater than 0.05, the estimated model does not suffer from autocorrelation.

3.8.4.4 Cross-Sectional Dependence Test

Cross-sectional dependence is also known as contemporaneous correlation. Cross-sectional dependence must be addressed, since it causes inaccuracy in regression results. Pesaran's CD test is used to check whether the residuals are correlated across units.
It is appropriate for panel data regression with relatively short time periods and a large number of cross-section entities. The null hypothesis is that the error terms are uncorrelated, as below:

H0: The error terms are uncorrelated,
Ha: The error terms are correlated.

If the p-value is greater than 0.05, the estimated model does not have any cross-sectional dependence. This test statistic rests on a basic assumption of normality and may be used to analyze both balanced and unbalanced panel data.

3.8.5 Panel-Corrected Standard Error (PCSE) Model

There are two suitable models that are commonly used to resolve heteroscedasticity, autocorrelation, cross-sectional dependence, and normality issues, namely the Generalized Least Squares (GLS) and panel-corrected standard errors (PCSE) models (Blackwell III, 2005). PCSE is considered appropriate if the number of cross-section units, N, is greater than the number of time periods, T; otherwise, GLS is suitable when the condition is reversed (Hoechle, 2007). According to Beck (2008), since the standard errors need to be adjusted in the presence of cross-sectional dependence and panel heteroscedasticity, it is recommended to apply PCSE instead of OLS standard errors.

3.8.6 Generalized Least Squares (GLS) Model

According to Wooldridge (2002), GLS estimation is more efficient than OLS estimation. This indicates that if the model encounters autocorrelation or heteroscedasticity issues, GLS estimation is appropriate for this study. This is supported by Hsiao (2002), who stated that this estimation may be used to address the autocorrelation issue. The non-normality of the data resulting from the heteroscedasticity problem will also be resolved within this approach (Gujarati & Porter, 2013). As a result, GLS estimation can tackle most of the autocorrelation and heteroscedasticity issues.
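The GLS idea can be illustrated with a minimal numpy sketch. Under heteroscedasticity with a known (here, simulated) error-variance structure, GLS reduces to weighting each observation by the inverse of its error variance, i.e. β̂ = (X′Ω⁻¹X)⁻¹ X′Ω⁻¹y. The data, variances, and variable names below are hypothetical, chosen only to show the mechanics; in practice the variance structure must be estimated (feasible GLS):

```python
# Minimal GLS sketch: inverse-variance weighting under heteroscedasticity.
import numpy as np

rng = np.random.default_rng(4)
n = 55
x = rng.uniform(1.0, 5.0, size=n)
X = np.column_stack([np.ones(n), x])

sigma2 = 0.5 * x ** 2                         # error variance grows with x
y = 2.0 + 1.5 * x + rng.normal(scale=np.sqrt(sigma2))

W = np.diag(1.0 / sigma2)                     # Omega^(-1) for diagonal Omega
beta_gls = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

# Ordinary least squares for comparison (unweighted).
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
```

Both estimators are unbiased here, but the GLS estimator has the smaller variance because it down-weights the noisier observations, which is the efficiency gain the section above refers to.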
3.8.7 Coefficient of Determination (R²)

The coefficient of determination is used to measure the proportion of the total variability in the dependent variable that is explained by the independent variables, as represented by the R-square value (R²). The coefficient of determination lies between 0 and 1. A value close to 1 means that the independent variables provide most of the information required to explain changes in the dependent variable. The following is the formula for computing the coefficient of determination, R²:

\[ R^2 = \frac{\sum_i \left( \hat{y}_i - \bar{y} \right)^2}{\sum_i \left( y_i - \bar{y} \right)^2}, \tag{3.33} \]

where
y_i: value of y for observation i,
\bar{y}: mean of the y values,
\hat{y}_i: the predicted value of y for observation i.

3.8.8 F-test

The F-test is performed to determine the joint effect of the independent variables on the dependent variable. The decision rule is that if the computed F statistic is greater than the critical value from the F table, the independent variables jointly affect the dependent variable. Conversely, if the F statistic is below the critical value, there is no joint effect of the independent variables on the dependent variable. The F-test formula is as follows:

\[ F = \frac{R^2 / k}{\left( 1 - R^2 \right) / \left( n - k - 1 \right)}, \tag{3.34} \]

where
R²: coefficient of determination,
k: number of regressors,
n: number of observations.

3.8.9 Hypothesis Testing

Hypothesis testing will be utilized in this study in order to fulfil objective three and to examine the significance of the independent variables towards the financial performance of takaful operators in terms of profitability. The null hypothesis is rejected when the p-value is less than the 1%, 5%, or 10% significance level. If a variable is not significant even at the 10% level, there is no evidence of a substantial link with financial profitability.
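Equations (3.33) and (3.34) can be checked with a short numerical sketch. The regression below uses simulated data with hypothetical coefficients, purely to show how R² and the F statistic are computed from fitted values:

```python
# Worked sketch of R^2 (3.33) and the joint-significance F test (3.34).
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n, k = 55, 3                                  # observations, regressors
X = np.column_stack([np.ones(n), rng.normal(size=(n, k))])
y = X @ np.array([1.0, 0.5, -0.3, 0.2]) + rng.normal(size=n)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ beta

# R^2 = explained sum of squares / total sum of squares        (3.33)
r2 = np.sum((y_hat - y.mean()) ** 2) / np.sum((y - y.mean()) ** 2)

# F = (R^2 / k) / ((1 - R^2) / (n - k - 1))                    (3.34)
f_stat = (r2 / k) / ((1 - r2) / (n - k - 1))
p_value = stats.f.sf(f_stat, k, n - k - 1)    # F(k, n-k-1) under H0
```

Comparing `p_value` against the chosen significance level gives the same decision as comparing the F statistic against the tabulated critical value.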
Consequently, each of the three approaches, namely claim ratio, VAIC, and RBC, is regarded as having a substantial association with financial profitability for both family and general takaful if it is significant at the 5% level or better. Below is the list of hypotheses for family and general takaful.

Hypothesis 1
H0: CR has no significant effect on financial profitability.
Ha: CR has a significant effect on financial profitability.

Hypothesis 2
H0: VAIC has no significant effect on financial profitability.
Ha: VAIC has a significant effect on financial profitability.

Hypothesis 3
H0: RBC has no significant effect on financial profitability.
Ha: RBC has a significant effect on financial profitability.
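The decision rule applied to Hypotheses 1 to 3 can be sketched in a few lines. The p-values below are hypothetical placeholders, not estimated results from this study:

```python
# Sketch of the rejection rule for Hypotheses 1-3: reject H0 when the
# p-value falls below the chosen significance level alpha.
p_values = {"CR": 0.003, "VAIC": 0.041, "RBC": 0.230}   # hypothetical values

def decision(p, alpha=0.05):
    """Return the test outcome at significance level alpha."""
    return "reject H0 (significant)" if p < alpha else "fail to reject H0"

results = {var: decision(p) for var, p in p_values.items()}
# With these placeholder p-values, CR and VAIC would be significant at 5%,
# while RBC would not be significant even at the 10% level.
```

The same rule is applied at the 1% and 10% levels by changing `alpha`, mirroring the three conventional significance thresholds stated in Section 3.8.9.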