Similar articles
20 similar articles found (search took 31 ms)
1.
The organizational literature is increasingly interested in the origins and consequences of category emergence. We examine the effects of being affiliated with categories initially considered illegitimate (‘divergence’), and of organizational attempts to blur the boundaries between categories (‘straddling’), on capital market reactions to firm announcements. We develop arguments for how these effects likely vary with increasing legitimation (‘currency’) of the category. We apply event study methodology to the complete population of firms' announcements of open source activities, an open innovation model for software development that is novel and defies the extant dominant logic of software production and valorization. Over a ten‐year period, we find negative effects of divergence, positive effects of straddling, and that the magnitude of both these effects diminishes with increasing category currency. The implications for theories of organization and open innovation in the context of category emergence are discussed.

2.
We develop a nuanced understanding of what drives producers’ and audiences’ categorization activities throughout market category development. Prior research on market categories assumes prototypical similarity to be the main or even only driver of categorization. Drawing on a comparative, longitudinal case study of the market categories ‘functional foods’ and ‘nanotechnology’ in Finland, we find that evolving perceptions, knowledge, and goals also impact categorization. Furthermore, our analysis uncovers that goal‐based categorization is characteristic of vital market categories, and the lack thereof may mark a waning interest and category decline. Overall, while previous research stresses the role of clear boundaries and knowledge bases for a viable category, we find that overly strict boundaries may constrain category vitality and renewal.

3.
This paper examines how different environmental policy types differentially impact firms and why firms vary in their responses to such policies. Based on the mechanisms embedded in policy instruments to create incentives for firms to comply, the characteristics of benefits/costs that policies impose on firms and the institutional context in which policy instruments were created and are sustained, the paper identifies five policy categories. These are category I (command and control), category II (market based), category III (mandatory information disclosures), category IV (business–government partnerships) and category V (private voluntary codes). Different policy types often bestow asymmetrical benefits/costs on firms. Some benefits/costs may constitute ‘private/club goods’ while others may constitute ‘public goods’. Drawing insights from public policy literature, the paper argues that firms can be expected to favor policies whose benefits have the characteristics of private/club goods but the costs of public goods. Thus, understanding the nature of benefits/costs (private/club versus public) and the magnitude of their excludability is critical in explaining the variations in firms' responses. To understand how managers perceive the nature of benefits/costs (monetary as well as non‐monetary), the paper draws on theories and perspectives in the business and public policy field. In doing so, the paper examines the ‘demand’ and the ‘supply’ sides as well as the market and non‐market environments of a given policy. Thus, the paper makes a case for a multi‐theoretic approach to understand variations in managerial assessments of benefits/costs, and consequently variations in their responses to various policy types. Copyright © 2004 John Wiley & Sons, Ltd and ERP Environment.

4.
We advocate for more tolerance in the manner we collectively address categories and categorization in our research. Drawing on the prototype view, organizational scholars have provided a ‘disciplining’ framework to explain how category membership shapes, impacts, and limits organizational success. By stretching the existing straitjacket of scholarship on categories, we point to other useful conceptualizations of categories – i.e. the causal‐model and the goal‐based approaches of categorization – and propose that depending on situational circumstances, and beyond a disciplining exercise, categories involve a cognitive test of congruence and a goal satisfying calculus. Unsettling the current consensus about categorical imperatives and market discipline, we also suggest that audiences may tolerate organizations that blend, span, and stretch categories more often than previously thought. We derive implications for research about multi‐category membership and mediation in markets, and suggest ways in which work on the theme of categories in the strategy, entrepreneurship, and managerial cognition literatures can be enriched.

5.
Statistical agencies are keen to devise ways to provide research access to data while protecting confidentiality. Although methods of statistical disclosure risk assessment are now well established in the statistical science literature, the integration of these methods by agencies into a general scientific basis for their practice still proves difficult. This paper seeks to review and clarify the role of statistical science in the conceptual foundations of disclosure risk assessment in an agency’s decision making. Disclosure risk is broken down into disclosure potential, a measure of the ability to achieve true disclosure, and disclosure harm. It is argued that statistical science is most suited to assessing the former. A framework for this assessment is presented. The paper argues that the intruder’s decision making and behaviour may be separated from this framework, provided appropriate account is taken of the nature of potential intruder attacks in the definition of disclosure potential.

6.
This study examines how the disclosure of negative sustainability‐related incidents affects the investment‐related judgments of decision‐makers. Participants in a sequential 2 × 2 between‐subjects experiment first received a company's financial information before viewing additional sustainability information (by the company and by a non‐governmental organization (NGO); with and without negative disclosure). Results indicate that self‐reporting of negative incidents does not affect decision‐makers’ stock price estimates and investment decisions compared with judgments based on financial information only. However, third‐party disclosure of these incidents by an NGO has a negative effect on these investment‐related judgments. Furthermore, the magnitude of the NGO reporting effect depends on whether the company itself simultaneously reports these incidents. Thus, disclosing negative incidents in sustainability reporting could lose some of its apparent stigma. Instead of avoiding negative reporting altogether, managers might use it as a risk mitigation tool in their reporting strategy. The results also emphasize the power of the often‐mentioned ‘watchdog’ function of NGOs acting as stakeholder advocates. Copyright © 2013 John Wiley & Sons, Ltd and ERP Environment.

7.
This is an expository paper. Here we propose a decision-theoretic framework for addressing aspects of the confidentiality of information problems in publicly released data. Our basic premise is that the problem needs to be conceptualized by looking at the actions of three agents: a data collector, a legitimate data user, and an intruder. We aim to prescribe the actions of the first agent, who desires to provide useful information to the second agent but must protect against possible misuse by the third. The first agent is under the constraint that the released data have to be public to all; in some societies this may not be the case.
A novel aspect of our paper is that all utilities—fundamental to decision making—are in terms of Shannon's information entropy. Thus what gets released is a distribution whose entropy maximizes the expected utility of the first agent. This means that the distribution that gets released will be different from that which generates the collected data. The discrepancy between the two distributions can be assessed via the Kullback–Leibler cross-entropy function. Our proposed strategy therefore boils down to the notion that it is the information content of the data, not the actual data, that gets masked. Current practice of "statistical disclosure limitation" masks the observed data via transformations or cell suppression. These transformations are guided by balancing what are known as "disclosure risks" and "data utility". The entropy indexed utility functions we propose are isomorphic to the above two entities. Thus our approach provides a formal link to that which is currently practiced in statistical disclosure limitation.
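The discrepancy measure the abstract refers to is the Kullback–Leibler divergence between the distribution generating the collected data and the (higher-entropy) distribution chosen for release. A minimal sketch, with the two example distributions invented here for illustration:

```python
import numpy as np

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(p || q) between discrete distributions."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    mask = p > 0  # terms with p_i = 0 contribute nothing
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

# Distribution estimated from the collected data vs. a flatter,
# higher-entropy distribution selected for release (hypothetical values).
collected = np.array([0.5, 0.3, 0.15, 0.05])
released = np.array([0.4, 0.3, 0.2, 0.1])

print(kl_divergence(collected, released))  # information lost by masking
```

The divergence is zero only when the released distribution equals the collected one, so it directly quantifies the utility sacrificed to protect the data's information content.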

8.
A basic concern in statistical disclosure limitation is the re-identification of individuals in anonymised microdata. Linking against a second dataset that contains identifying information can result in a breach of confidentiality. Almost all linkage approaches are based on comparing the values of variables that are common to both datasets. It is tempting to think that if datasets contain no common variables, then there can be no risk of re-identification. However, linkage has been attempted between such datasets via the extraction of structural information using ordered weighted averaging (OWA) operators. Although this approach has been shown to perform better than randomly pairing records, it is debatable whether it demonstrates a practically significant disclosure risk. This paper reviews some of the main aspects of statistical disclosure limitation. It then goes on to show that a relatively simple, supervised Bayesian approach can consistently outperform OWA linkage. Furthermore, the Bayesian approach demonstrates a significant risk of re-identification for the types of data considered in the OWA record linkage literature.

9.
In recent years, the determinants of voluntary disclosure have been explored in an extensive body of empirical research. One major limitation of those studies is that none has tried to find out whether voluntary disclosures were occasional or continuous over time. Yet this point is particularly important, as the voluntary disclosure mechanism can only be fully effective if the manager consistently reports the same items. This paper examines the factors associated with the decision to stop disclosing an item of information previously published voluntarily (henceforth ‘information withholding’ or IW). To measure information withholding, we code 178 annual reports of French firms for three consecutive years. Although disclosure scores are relatively stable over time, we find that this does not mean there is no change in voluntary disclosure across the years. We document that IW is a widespread practice: on average, one voluntary item out of seven disclosed in a given year is withheld the following year. We show that information withholding is mainly related to the firm's competition environment, ownership diffusion, board independence and the existence of a dual leadership structure (separate CEO and chairman).

10.
Vast amounts of data that could be used in the development and evaluation of policy for the benefit of society are collected by statistical agencies. It is therefore no surprise that there is very strong demand from analysts, within business, government, universities and other organisations, to access such data. When allowing access to micro‐data, a statistical agency is obliged, often legally, to ensure that it is unlikely to result in the disclosure of information about a particular person or organisation. Managing the risk of disclosure is referred to as statistical disclosure control (SDC). This paper describes an approach to SDC for output from analysis using generalised linear models, including estimates of regression parameters and their variances, diagnostic statistics and plots. The Australian Bureau of Statistics has implemented the approach in a remote analysis system, which returns analysis output from remotely submitted queries. A framework for measuring disclosure risk associated with a remote server is proposed. The disclosure risk and utility of the approach are measured in two real‐life case studies and in simulation.

11.
This paper provides a review of common statistical disclosure control (SDC) methods implemented at statistical agencies for standard tabular outputs containing whole population counts from a census (either enumerated or based on a register). These methods include record swapping on the microdata prior to its tabulation and rounding of entries in the tables after they are produced. The approach for assessing SDC methods is based on a disclosure risk–data utility framework and the need to find a balance between managing disclosure risk while maximizing the amount of information that can be released to users and ensuring high quality outputs. To carry out the analysis, quantitative measures of disclosure risk and data utility are defined and methods compared. Conclusions from the analysis show that record swapping as a sole SDC method leaves high probabilities of disclosure risk. Targeted record swapping lowers the disclosure risk, but there is more distortion of distributions. Small cell adjustments (rounding) give protection to census tables by eliminating small cells but only one set of variables and geographies can be disseminated in order to avoid disclosure by differencing nested tables. Full random rounding offers more protection against disclosure by differencing, but margins are typically rounded separately from the internal cells and tables are not additive. Rounding procedures protect against the perception of disclosure risk compared to record swapping since no small cells appear in the tables. Combining rounding with record swapping raises the level of protection but increases the loss of utility to census tabular outputs. For some statistical analysis, the combination of record swapping and rounding balances to some degree opposing effects that the methods have on the utility of the tables.
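The random rounding discussed above can be sketched in a few lines. This is a generic unbiased rounding to base 3 (a common choice for census tables), not the specific procedure evaluated in the paper; the example counts are invented:

```python
import random

_rng = random.Random(0)  # fixed seed so the sketch is reproducible

def random_round(count, base=3, rng=_rng):
    """Unbiased random rounding: round a cell count to a multiple of `base`,
    choosing up or down with probabilities that preserve the expected value."""
    remainder = count % base
    if remainder == 0:
        return count  # already a multiple of the base
    if rng.random() < remainder / base:
        return count + (base - remainder)  # round up
    return count - remainder               # round down

table = [1, 2, 4, 17, 30]
print([random_round(c) for c in table])  # no unsafe 1s or 2s survive
```

Because the up/down probabilities are proportional to the remainder, the expected value of each rounded cell equals the true count; but, as the abstract notes, rounding margins separately from internal cells means the published table need not be additive.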

12.
This paper explores the concept of ‘causality’ in the social and behavioural sciences, particularly as deployed in economics and econometrics (or statistical testing in general). Causality is closely associated with empirical testing of theories and hypotheses in an empiricist manner. In this way it is fundamentally connected to our notions of scientific endeavour. But causality is also a rhetorical category in these areas, as well as in everyday debate about social and economic matters. The question raised is ‘Why has causality come to occupy such a central place in our intellectual culture? What are the conditions of existence of a “causal culture” in both the domain of scientific endeavour and popular disputation?’ The argument is that there are a wide range of conditions that support this ‘causal culture’, not least of which is an ethical imperative associated with a felt need to attribute blame for mishaps, accidents, conditions of life, and so on. Thus an ethic of morality is strongly involved. The rhetorical operation of causality is linked to this ethic and, in addition, to a range of other non-ethical conditions for the ‘causality culture’ within economics and related disciplines.

13.
Risk‐utility formulations for problems of statistical disclosure limitation are now common. We argue that these approaches are powerful guides to official statistics agencies in regard to how to think about disclosure limitation problems, but that they fall short in essential ways from providing a sound basis for acting upon the problems. We illustrate this position in three specific contexts—transparency, tabular data and survey weights, with shorter consideration of two key emerging issues—longitudinal data and the use of administrative data to augment surveys.

14.
Grade averaging (by arithmetic mean) is often performed as an attempt to assess overall student performance. In the case of grade comparison originating in non-equivalent scales, rank errors and absurd averaging may result. As averages are sometimes used for candidate selection, the paper discusses how decisions based on arithmetic mean interpretation may be true, false, or fuzzy, according to different categories of participating candidates. A two stage selection process is analyzed from the perspective of candidate categories. The impact of the choice of assessment scale on the decision-making process is also evaluated and statistical biases are identified. The relevance of using a uniformity criterion is demonstrated.
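The rank errors the abstract warns about are easy to reproduce. In this invented example, two candidates are graded in two courses that use non-equivalent scales (one out of 20, one out of 5); averaging the raw marks and averaging the scale-normalized marks rank them in opposite orders:

```python
# Grades on non-equivalent scales: course1 is marked out of 20, course2 out of 5.
grades = {
    "A": {"course1": 18, "course2": 2},
    "B": {"course1": 10, "course2": 5},
}
scale_max = {"course1": 20, "course2": 5}

# Naive arithmetic mean of the raw marks.
raw_mean = {c: sum(g.values()) / len(g) for c, g in grades.items()}
# Mean after normalizing each mark by its scale maximum.
norm_mean = {c: sum(g[k] / scale_max[k] for k in g) / len(g)
             for c, g in grades.items()}

print(raw_mean)   # {'A': 10.0, 'B': 7.5}   -> A ranked first
print(norm_mean)  # {'A': 0.65, 'B': 0.75}  -> B ranked first: rank reversal
```

The raw mean is dominated by the course with the larger scale, which is one form of the statistical bias the paper attributes to the choice of assessment scale.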

15.
This essay offers a reflexive return to two research projects to demonstrate the value of Bourdieu's emphasis on the symbolic for the analysis of contemporary urban transformation. Bourdieu's insistence that we track the social genesis and diffusion of spatial categories of thought and action directs us to the empirical study of the struggles between agents and organizations that promote and/or oppose these categories, as well as the political, economic and other interests animating the agents. A retracing of the parallel invention of the ‘at‐risk neighborhood’ (quartier sensible) coined for and targeted by French urban policy since the late 1980s and the emergence of ‘historic’ or ‘diverse’ neighborhoods touted by gentrifying residents, cultural organizations and real estate agents in the United States since the 1960s challenges misleading oppositions between materiality and representations that often underpin and cramp urban research.

16.
This article explores the contours of modernization in the unmaking and remaking of homes among evicted and resettled families in highrise housing. We examine the trajectories of forced eviction by drawing upon interviews with 17 individuals from nine evicted families who have transitioned from living in informal settlements to highrise social housing (rusunawa) in Jakarta. Drawing on two strands of literature—‘developmental idealism and the family’ from population studies and the critical geographies of ‘homemaking’—we argue that the demolition of houses is but an initial event in a long, quiet and subtle, yet profoundly defining, process of ‘upgrading’ families as part of ‘improving’ society, according to developmental logic. The disciplining of the urban poor does not end with the demolition of their houses, but rather continues as part of the fulfilment of shelter. This article attends to the slow unravelling of home hidden and embedded in post-eviction everyday lives, which are often overlooked because of the overt and violent brutality of forced eviction. While eviction can be seen as the violent visual expression of developmentalism, we argue that the relocation in rusunawa is where this ideal permeates into daily domestic life, making mundane activities a battleground for different ideals of ‘home’.

17.
This paper argues that the theoretical framework conventionally used by regional economists when analysing labour markets does not allow the spatial dimension of these markets to be discussed in a wholly satisfactory way. Search theory, which has been little used in regional economics, offers a means of overcoming this limitation. A search theory of labour markets is developed which includes as an exogenous variable the cost of travelling between home and work. The theory is shown to imply an inverse relationship between ‘accessibility to jobs’ and ‘duration of unemployment’.

18.
This paper examines the determinants of price adjustment decisions by supermarkets to increase or decrease prices for 11 different food categories and evaluates the characteristics of these firms that influence these decisions. We use a unique dataset to analyze firm variables and industry variables and their impact on price adjustment in supermarket stores. The study contributes to the price adjustment literature by identifying determinants of price behavior by stores and product category. We find that the rationale for increasing prices differs from that for decreasing prices, retailers make different adjustment decisions based on product category, and market‐level controls have little impact. Copyright © 2015 John Wiley & Sons, Ltd.

19.
Serious concerns have been raised that false positive findings are widespread in empirical research in business disciplines. This is largely because researchers almost exclusively adopt the ‘p‐value less than 0.05’ criterion for statistical significance; and they are often not fully aware of large‐sample biases which can potentially mislead their research outcomes. This paper proposes that a statistical toolbox (rather than a single hammer) be used in empirical research, which offers researchers a range of statistical instruments, including a range of alternatives to the p‐value criterion such as the Bayesian methods, optimal significance level, sample size selection, equivalence testing and exploratory data analyses. It is found that the positive results obtained under the p‐value criterion cannot stand, when the toolbox is applied to three notable studies in finance.
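The large-sample bias at issue can be demonstrated with a short simulation (not drawn from the paper's case studies): with a million observations, a mean shift of 0.005 standard deviations, economically negligible, still clears the ‘p < 0.05’ bar comfortably:

```python
import math
import random

rng = random.Random(42)
n = 1_000_000
true_effect = 0.005  # a practically trivial mean shift (sigma = 1)
x = [rng.gauss(true_effect, 1.0) for _ in range(n)]

# Two-sided z-test of H0: mean = 0, with sigma known to be 1 for the sketch.
mean = sum(x) / n
se = 1.0 / math.sqrt(n)
z = mean / se
p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value

print(f"z = {z:.2f}, p = {p:.2g}")  # 'significant', yet the effect is ~0
```

As n grows, the standard error shrinks like 1/sqrt(n), so any fixed nonzero effect eventually becomes ‘significant’; this is why the paper argues for supplements such as effect sizes, equivalence testing, or a sample-size-adjusted significance level.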

20.
In data integration contexts, two statistical agencies seek to merge their separate databases into one file. The agencies also may seek to disseminate data to the public based on the integrated file. These goals may be complicated by the agencies' need to protect the confidentiality of database subjects, which could be at risk during the integration or dissemination stage. This article proposes several approaches based on multiple imputation for disclosure limitation, usually called synthetic data, that could be used to facilitate data integration and dissemination while protecting data confidentiality. It reviews existing methods for obtaining inferences from synthetic data and points out where new methods are needed to implement the data integration proposals.
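The core idea of fully synthetic data can be sketched very simply: fit a model to the confidential values, release several datasets drawn from the model rather than the values themselves, and let the analyst average estimates across the copies. This is a toy normal-model illustration, not the article's multiple-imputation machinery (whose combining rules also adjust the variance, omitted here):

```python
import random
import statistics

rng = random.Random(1)
confidential = [rng.gauss(50, 10) for _ in range(200)]  # variable to protect

# Fit a simple parametric model to the confidential column.
mu = statistics.fmean(confidential)
sigma = statistics.stdev(confidential)

# Release m fully synthetic copies drawn from the fitted model; no real
# value appears in any released file.
m = 5
synthetic = [[rng.gauss(mu, sigma) for _ in confidential] for _ in range(m)]

# The analyst computes the estimate on each copy and averages.
point_estimate = statistics.fmean(statistics.fmean(d) for d in synthetic)
print(round(point_estimate, 1))  # close to the confidential mean
```

In the data-integration setting the article studies, such draws would come from a model fitted to the merged file, so the public release reflects the integrated information without exposing either agency's records.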


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司) · 京ICP备09084417号