Similar Articles

20 similar articles found.
1.
Abstract

All instruments designed to measure latent (unobservable) variables, such as patient-reported outcomes (PROs), have three major requirements: a coherent construct theory, a specification equation, and the application of an appropriate response model. The theory guides the selection of content for the questionnaire, and the specification equation links the construct theory to scores produced with the instrument. For the specification equation to perform this role, the patient-reported outcome measure (PROM) must employ a response model that generates values for its individual items. The most commonly applied response model in PROM development is the Rasch model. To date, this level of measurement sophistication has not been achieved in PRO measurement. Consequently, it is not possible to establish a PROM’s true construct validity. However, the development of the Lexile Framework for Reading has demonstrated that such objective measurement is possible for latent variables. This article argues that higher-quality PROM development is needed if meaningful and valid PRO measurement is to be achieved. It describes the current state of PROM development, shows that published reviews of PROMs adopt inappropriate criteria for judging their quality, and illustrates how the use of traditional PROMs can lead to incorrect (and possibly dangerous) conclusions being drawn about the efficacy of interventions.

2.
Abstract

Composite measures that combine different types of indicators are widely used in medical research: to evaluate health systems, as outcomes in clinical trials, and in patient-reported outcome measurement. The potential advantages of such indices are clear. They are used to summarise complex data and to overcome the problem of evaluating new interventions when the most important outcome is rare or likely to occur far in the future. However, many scientists question the value of composite measures, primarily due to inadequate development methodology, lack of transparency, or the likelihood of producing misleading results. It is argued that the real problems with composite measurement relate to their failure to take account of measurement theory and the absence of coherent theoretical models that justify the addition of the individual indicators combined into the composite index. All outcome measures must be unidimensional if they are to provide meaningful data. They should also have dimensional homogeneity. Ideally, a specification equation should be developed that can predict accurately how organisations or individuals will score on an index, based on their scores on the individual indicators that make up the measure. The article concludes that composite measures should not be used, as they fail to apply measurement theory and, consequently, produce invalid and misleading scores.

3.
Abstract

The paper reconstructs the history of the experimental attempts to measure the cardinal utility of money between 1950 and 1985 within the framework provided by expected utility theory (EUT). It is shown that this history displays a definite trajectory: from the confidence in EUT and the EUT-based measurement of utility of the 1950s to the scepticism that, from the mid-1970s, haunted the validity of EUT as well as the significance of the utility measures obtained through it. By exploring the diverse aspects and causes of this trajectory, the paper covers new ground in the history of both decision theory and utility measurement.

4.
Background Efficient use of health resources requires accurate outcome assessment. Disease-specific patient-reported outcome (PRO) measures are designed to be highly relevant to patients with a specific disease. They have advantages over generic PROs that lack relevance to patient groups and miss crucial impacts of illness. It is thought that disease-specific measurement cannot be used in comparative effectiveness research (CER). The present study provides further evidence of the value of disease-specific measures in making valid comparisons across diseases.

Methods The Asthma Life Impact Scale (ALIS, 22 items), Living with Chronic Obstructive Pulmonary Disease (LCOPD, 22 items) scale, and Cambridge Pulmonary Hypertension Outcome Review (CAMPHOR, 25 items) were completed by 140, 162, and 91 patients, respectively. The three samples were analyzed for fit to the Rasch model, then combined into a scale consisting of 58 unique items and re-analyzed. Raw scores on the three measures were co-calibrated and a transformation table produced.

Results The scales fit the Rasch model individually (ALIS Chi2 probability value (p-Chi2) = 0.05; LCOPD p-Chi2 = 0.38; CAMPHOR p-Chi2 = 0.92). The combined data also fit the Rasch model (p-Chi2 = 0.22). There was no differential item functioning related to age, gender, or disease. The co-calibrated scales successfully distinguished between perceived severity groups (p < 0.001).

Limitations The samples were drawn from different sources. For scales to be co-calibrated using a common item design, they must be based on the same theoretical construct, be unidimensional, and have overlapping items.

Conclusions The results showed that it is possible to co-calibrate scores from disease-specific PRO measures. This will permit more accurate and sensitive outcome measurement to be incorporated into CER. The co-calibration of needs-based disease-specific measures allows the calculation of γ scores that can be used to compare directly the impact of any type of intervention on any disease included in the co-calibration.
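The co-calibration described above rests on the Rasch model, under which the probability of endorsing a dichotomous item depends only on the difference between person ability and item difficulty, both expressed in logits. A minimal sketch (the function name is illustrative, not taken from the paper):

```python
import math

def rasch_probability(theta, b):
    """Probability of endorsing a dichotomous item under the Rasch model,
    given person ability theta and item difficulty b (both in logits)."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# When ability equals item difficulty, the endorsement probability is 0.5.
print(rasch_probability(1.2, 1.2))  # -> 0.5
```

Because item difficulties estimated under this model sit on a common logit metric, scales that share anchor items can be placed on one scale, which is what makes a raw-score transformation table of the kind the abstract describes possible.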

5.
To avoid information loss or measurement error in traditional methods dealing with mixed frequency data, we develop a novel mixed data sampling expectile regression (MIDAS-ER) model to measure financial risk. We construct the MIDAS-ER model by introducing a MIDAS structure into expectile regressions. This enables us to perform an expectile regression on raw mixed frequency data directly. We apply the proposed MIDAS-ER model to estimate two popular financial risk measures, namely, Value at Risk and Expected Shortfall, with both simulated data and four stock indices, and compare the model's performance with those of several popular models. The outstanding performance of our model demonstrates that high-frequency information helps to improve the accuracy of risk measurement. In addition, the numerical results also imply that our model can be a significant tool for risk-averse investors to control risk losses and for financial institutions to implement robust risk management.
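Expectile regression, the building block of the MIDAS-ER model above, replaces the symmetric squared loss of least squares with an asymmetrically weighted one: positive residuals receive weight τ and negative residuals weight 1 − τ. A minimal sketch of the loss and of computing a sample expectile (the function names and the simple fixed-point solver are illustrative assumptions, not the paper's estimator):

```python
import numpy as np

def expectile_loss(residuals, tau):
    """Asymmetric squared loss underlying expectile regression:
    weight tau on non-negative residuals, (1 - tau) on negative ones."""
    r = np.asarray(residuals, dtype=float)
    w = np.where(r >= 0, tau, 1.0 - tau)
    return np.sum(w * r ** 2)

def sample_expectile(y, tau, tol=1e-10):
    """The tau-expectile of a sample minimises the asymmetric loss;
    solve the first-order condition by fixed-point iteration."""
    y = np.asarray(y, dtype=float)
    mu = y.mean()  # the 0.5-expectile is the ordinary mean
    for _ in range(10_000):
        w = np.where(y >= mu, tau, 1.0 - tau)
        new_mu = np.sum(w * y) / np.sum(w)
        if abs(new_mu - mu) < tol:
            break
        mu = new_mu
    return mu
```

For τ = 0.5 the expectile reduces to the mean; for τ > 0.5 it shifts toward the upper tail, which is what makes expectiles usable as tail-risk measures.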

6.
Abstract

Patient-reported outcome (PRO) instruments are related to risk management programmes in that they are tools to measure the benefits and risks of exposure to pharmaceutical products from the patient's perspective. Clinical measures of improvement of certain conditions may not necessarily correlate with improvements in a patient's ability to perform daily activities. PRO data, when properly administered, collected, analysed and returned to physicians, are a very useful source of information. This will ultimately address safety concerns, facilitate the physician–patient relationship and improve patients' compliance with treatment in routine patient care. In this article we stress the importance of PRO in risk management.

7.
ABSTRACT

Clear and well-defined patent rights can incentivize innovation by granting monopoly rights to the inventor for a limited period of time in exchange for public disclosure of the invention. However, with cumulative innovation, when a product draws from intellectual property held across multiple firms (including fragmented intellectual property or patent thickets), contracting failures may lead to suboptimal economic outcomes. An alternative theory, developed by a variety of scholars, nonetheless contends that patent thickets have a more ambiguous effect. Researchers have developed several measures to gauge the extent and impact of cumulative innovation and the various channels of patent thickets. This paper contends that mis-measurement may contribute to the incoherence and overall lack of consensus within the patent thickets literature. Specifically, the literature is missing a precise measure of vertically overlapping claims. We propose a new measure of vertically overlapping claims that incorporates invention similarity to more precisely identify inventive overlap. The measure defined in this paper will enable more accurate measurement, and allow for novel economic research on cumulative innovation, fragmentation in intellectual property, and patent thickets within and across all patent jurisdictions.

8.
Abstract

No doubt, the global economic (and political) structure is very unequal. The paper begins by demonstrating the various dimensions of this inequality as they relate to economic measures such as per capita GDP, degree of consumption and ownership, health measures, education, and power and influence in various global organizations such as the United Nations (UN), World Trade Organization (WTO), International Monetary Fund (IMF), and others. Next, the paper, supporting a more equal global economic and political structure, investigates the various instruments in welfare economics and ethics theory that can be utilized to justify a sort of distributional change that could lead to more global equality. Finding various economic and ethical instruments associated with utilitarianism, Pareto Optimality and the Hicks–Kaldor compensation test less than satisfactory in dealing with and advocating sufficient global distributional changes, we investigate ethical principles developed by John Rawls in both his 1971 A Theory of Justice and his 1999 The Law of Peoples, Sen's capability approach, the debate between Rawls and Sen regarding their ethical principles, and whether or not those ethical principles can justify necessary global distributional changes. As we argue, although the principles developed by Sen and Rawls can be utilized to justify global distributional changes to a degree, they cannot advocate a global difference principle that can justify sufficient global distributional changes. An attempt is made to develop a global difference principle that can justify and advocate more drastic distributional changes.

9.
Researchers often use the discrepancy between self-reported and biochemically assessed active smoking status to argue that self-reported smoking status is not reliable, ignoring the limitations of biochemically assessed measures and treating it as the gold standard in their comparisons. Here, we employ econometric techniques to compare the accuracy of self-reported and biochemically assessed current tobacco use, taking into account measurement errors with both methods. Our approach allows estimating and comparing the sensitivity and specificity of each measure without directly observing true smoking status. The results, robust to several alternative specifications, suggest that there is no clear reason to think that one measure dominates the other in accuracy.

10.
This study presents a new measure of financial development that is directly derived from theory. Our measure, the Marginal Utilization of Debt (hereafter, MUD) comes from the seminal work of Myers (1984), Myers and Majluf (1984) and Shyam-Sunder and Myers (1999). Further, it is directly related to the development facts of Gurley and Shaw (1955). MUD is a global measure that reflects conditions in both debt and equity markets. It varies enormously across nations; from 0.23 in Australia at one extreme to 0.96 in Turkey at the other. Cross-country variations in MUD are not random; they are related to special-purpose measures of debt and equity market advancement from the financial development literature. Richer, more advanced nations have smaller average MUDs. We argue that the MUD may be useful for a variety of purposes and provide three example applications.

11.

This paper estimates price indexes for laptop personal computers using hedonic methods and data taken from PC Magazine technical reviews. We use benchmark test results to construct a measure of system performance that encapsulates factors that have previously gone unmeasured, such as the interactions between hardware components. The resulting hedonic function is parsimonious yet has good explanatory power. A second approach to performance measurement is developed using a set of technical proxies that are shown to closely approximate the benchmark test scores, and are thus nearly perfectly equivalent in terms of resulting price index estimates. While not as parsimonious as a single performance measure, these proxies have the advantage of not requiring direct performance testing, and could thus be applied to larger data sets. Laptops were found to have declined in quality-adjusted price at an average rate of 40% per year for the period 1990-1998.
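A hedonic index of the kind described above regresses log price on quality characteristics and time dummies; the coefficient on a period dummy traces the quality-adjusted price change between periods. A toy sketch with simulated data (all variable names and numbers are invented for illustration, not taken from the paper):

```python
import numpy as np

# Hypothetical data: log prices depend on a benchmark performance score
# and a year dummy; the dummy coefficient captures the quality-adjusted
# price change between the base year and the next year.
rng = np.random.default_rng(0)
n = 200
perf = rng.uniform(1.0, 5.0, n)           # benchmark performance measure
year = rng.integers(0, 2, n)              # 0 = base year, 1 = next year
log_price = 7.0 + 0.5 * perf - 0.5 * year + rng.normal(0, 0.05, n)

# OLS hedonic regression: log price on a constant, performance, year dummy.
X = np.column_stack([np.ones(n), perf, year])
beta, *_ = np.linalg.lstsq(X, log_price, rcond=None)

# exp(coefficient on the year dummy) - 1 approximates the
# quality-adjusted price change between the two years.
price_change = np.exp(beta[2]) - 1.0
```

With many periods, one dummy per period yields a full quality-adjusted price index in the same way.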

12.
This paper examines and measures innovation in the context of biotechnology firms by analysing the link between R&D, innovation performance and organisational growth. We conceptualise innovation performance as a latent construct with two dimensions: innovation efficacy and innovation efficiency. We use structural equation modelling to test the hypotheses on a data set from the biotechnology industry. Results support our innovation performance conceptualisation, which is found to be especially useful for measuring innovation in industries with long product development cycles. Findings also underline the importance of R&D knowledge creation for biotechnology firms.

13.
This study presents a theoretical model and assessment tool that measures individual differences in risk-aversion in financial matters. Unlike other measures of financial risk-taking, this measure assumes no prior technical knowledge of finance. The assessment tool was developed using item response theory as well as classical test theory methods. The measure is tested for predictive validity through various procedures and is shown to possess it. In addition, the measure is tested for construct validity using structural equation modeling and allows for the successful classification of individuals into one of four categories: Non-Investor, Risk Managing Investor, Conservative Investor, and Speculator. We discuss potential applications of this measure.

14.
ABSTRACT

Theoretical approaches to European integration often downplay and sometimes ignore the role of external actors. But the regime complex through which the euro crisis of 2010-2015 was prosecuted involved the United States directly and indirectly through the IMF. Tracing such external involvement shows that, although they preferred greater deepening of euro area institutions than was achieved, U.S. and IMF officials nonetheless contributed substantially to the creation of the EFSF/ESM, robust ECB action and the launch of the banking union project. The conclusion formulates falsifiable expectations on which a theory of external influence in regional integration can be developed and tested.

15.
Several reasons have been put forward to explain the high dispersion of productivity across establishments: quality of management, different input usage and market distortions, to name but a few. Although it is acknowledged that a sizable portion of productivity dispersion may also be due to measurement error, little research has been devoted to identifying how much it contributes. We outline a novel procedure for identifying the role of measurement error in explaining the empirical dispersion of productivity across establishments. The starting point of our framework is the errors-in-variables model, consisting of a measurement equation and a structural equation for latent productivity. We estimate the variance of the measurement error and subsequently estimate the variance of the latent productivity variable, which is not contaminated by measurement error. Using Norwegian data on the manufacture of food products, we find that about one percent of the measured dispersion stems from measurement error.
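The identification idea can be illustrated with classical measurement error: measured productivity equals latent productivity plus an independent error, so the variance of measured productivity overstates the latent variance. With two independent noisy measurements of the same latent variable, their covariance recovers the latent variance and the remainder is attributable to error. A simulated sketch (the two-measurement design and all parameter values are illustrative assumptions, not the paper's procedure):

```python
import numpy as np

# Simulate latent productivity and two independent error-ridden
# measurements of it; only m1 and m2 would be observed in practice.
rng = np.random.default_rng(1)
n = 100_000
latent = rng.normal(0.0, 1.0, n)          # latent (log) productivity
m1 = latent + rng.normal(0.0, 0.1, n)     # first noisy measurement
m2 = latent + rng.normal(0.0, 0.1, n)     # second noisy measurement

# Because the errors are independent of each other and of the latent
# variable, cov(m1, m2) identifies var(latent).
var_measured = m1.var()
var_latent = np.cov(m1, m2)[0, 1]
error_share = 1.0 - var_latent / var_measured   # share due to error
```

Under these invented parameters the error share works out to roughly one percent, but that figure is an artifact of the chosen noise level, not a reproduction of the paper's estimate.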

16.
Irreversible investment and Knightian uncertainty
When firms make a decision about irreversible investment, they may not have complete confidence in their perceived probability measure describing future uncertainty. They may think other probability measures perturbed from the original one are also possible. Such uncertainty, characterized not by a single probability measure but by a set of probability measures, is called “Knightian uncertainty.” The effect of Knightian uncertainty on the value of an irreversible investment opportunity is shown to be drastically different from that of traditional uncertainty in the form of risk. Specifically, an increase in Knightian uncertainty decreases the value of the investment opportunity, while an increase in risk increases it.

17.
Abstract

The authors investigate the role of mutual fund flows in incorporating market sentiment into asset prices. They show that retail investors adjust their investments among mutual fund categories in response to changes in market sentiment. Consistent with sentiment-induced price pressure through fund flows, they further find that firms favored by mutual funds, such as large-cap firms, dividend payers, and firms with high institutional ownership, are sensitive to market sentiment. The authors construct a pricing factor representing sentiment risk and find that the sentiment factor is significant in standard asset pricing models and robust to various sorting procedures.

18.
The model emphasizes the financial part of the economy and the channels through which the central bank and the government can affect it. The model combines a complete flow-of-funds matrix with an income–expenditure scheme in a common framework. The consistency of the flow-of-funds matrix is achieved through residual determination of one asset/liability from each financial balance identity. The model describes the Swedish credit market after the abolition of credit market regulation. Thus the policy instruments included comprise – among others – the interest rate scale, the cash reserve requirement, the exchange rate, government consumption and differential tax rates, but no direct regulation of bank advances or investment in government securities. The model mechanisms are illustrated with policy simulations. These display, in some instances, processes which after some periods tend to reverse the intended effects of the original policy measure. They therefore point to the need for a strategy which involves a sequential use of several policy instruments.

19.
ABSTRACT

Performance-based research evaluations have been adopted in several countries both to measure research quality in higher education institutions and as a basis for the allocation of funding across institutions. Much attention has been given to evaluating whether such schemes have increased the quality and quantity of research. This paper examines whether the introduction of the New Zealand Performance-Based Research Fund process produced convergence or divergence in measured research quality across universities and disciplines between the 2003 and 2012 assessments. Two convergence measures are obtained. One, referred to as β-convergence, relates to the relationship between changes in average quality and the initial quality level. The second concept, referred to as σ-convergence, relates to the changes in the dispersion in average research quality over time. Average quality scores by discipline and university were obtained from individual researcher data, revealing substantial β- and σ-convergence in research quality over the period. The hypothesis of uniform rates of convergence across almost all universities and disciplines is supported. The results provide insights into the incentives created by performance-based funding schemes.

20.
Adjusting national income for depletion is important in order to send correct signals to policy makers. This article reviews a number of depletion measures that have been recently brought forward in the context of environmental accounting (‘practice’) and green accounting (‘theory’): depletion as change in total wealth; depletion as ‘using up’ of the resource; depletion as net savings; or depletion as net investment. The differences in assumptions between these measures are clarified by contrasting their approaches with the classic theory of a firm engaged in extraction. All measures are evaluated using a time series of data on Dutch natural gas reserves. Our main findings are that correcting for the cost of depletion would lead to significant adjustments of both the level and growth rates of Dutch net national income, with a strong dependency on the chosen measure. We counter criticism that accounting in practice would necessarily underestimate depletion. The choice of a depletion measure should be determined by the context of use: measurement of social welfare or sustainable income. The physical measure put forward in the SEEA Central Framework can be justified by its consistency with the income concept that underlies the SNA.
