20 similar documents found.
1.
Empirical analysis of individual response behavior is sometimes limited due to the lack of explanatory variables at the individual level. In this paper we put forward a new approach to estimate the effects of covariates on individual response, where the covariates are unknown at the individual level but observed at some aggregated level. This situation may, for example, occur when the response variable is available at the household level but covariates only at the zip-code level.
2.
The price of a product depends on its characteristics and will vary in dynamic markets. The model describes a processing firm that bids in an auction for a heterogeneous and perishable input. The reduced form of this model is estimated as an expanded random parameter model that combines a nonlinear hedonic bid function and inverse input demand functions for characteristics. The model was estimated using 289,405 transactions from the Icelandic fish auctions. Total catch and gut ratio were the main determinants of marginal prices of characteristics, while the price of cod mainly depended on size, gutting and storage.
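As an illustrative sketch only (the paper's estimator is an expanded random-parameter model, not plain OLS), a log-linear hedonic price function can be fit to simulated transactions; the characteristics, coefficients and data below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
# Hypothetical lot characteristics: fish size (kg), gutted dummy, days in storage
size = rng.lognormal(0.5, 0.3, n)
gutted = rng.integers(0, 2, n)
storage = rng.integers(0, 5, n)
# Simulated log price with assumed marginal effects
log_price = 1.0 + 0.6 * np.log(size) + 0.15 * gutted - 0.05 * storage \
    + rng.normal(0, 0.1, n)

# Hedonic OLS: regress log price on characteristics
X = np.column_stack([np.ones(n), np.log(size), gutted, storage])
beta, *_ = np.linalg.lstsq(X, log_price, rcond=None)
# beta recovers estimates near the assumed (1.0, 0.6, 0.15, -0.05)
```

The fitted coefficients are read as marginal (log-)prices of the characteristics, the quantity the abstract's "marginal prices of characteristics" refers to.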
3.
We describe procedures for Bayesian estimation and testing in cross-sectional, panel data and nonlinear smooth coefficient models. The smooth coefficient model is a generalization of the partially linear or additive model wherein coefficients on linear explanatory variables are treated as unknown functions of an observable covariate. In the approach we describe, points on the regression lines are regarded as unknown parameters and priors are placed on differences between adjacent points to introduce the potential for smoothing the curves. The algorithms we describe are quite simple to implement—for example, estimation, testing and smoothing parameter selection can be carried out analytically in the cross-sectional smooth coefficient model.
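Placing a Gaussian prior on differences between adjacent points has a well-known penalized-least-squares analogue: the posterior mean solves min ||y − f||² + λ||Df||² with D the first-difference matrix. A minimal sketch of that analogue (the smoothing parameter here is fixed by hand, whereas the paper selects it within the Bayesian procedure):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
x = np.linspace(0, 1, n)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, n)

# First-difference matrix: penalizing ||D f||^2 mirrors a random-walk prior
# on differences between adjacent points of the curve
D = np.diff(np.eye(n), axis=0)
lam = 50.0  # smoothing parameter (assumed; chosen by Bayesian methods in the paper)
f_hat = np.linalg.solve(np.eye(n) + lam * D.T @ D, y)  # posterior-mean analogue
```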
4.
We introduce a new class of models that has both stochastic volatility and moving average errors, where the conditional mean has a state space representation. Having a moving average component, however, means that the errors in the measurement equation are no longer serially independent, and estimation becomes more difficult. We develop a posterior simulator that builds upon recent advances in precision-based algorithms for estimating these new models. In an empirical application involving US inflation we find that these moving average stochastic volatility models provide better in-sample fit and out-of-sample forecast performance than the standard variants with only stochastic volatility.
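The model class can be illustrated by simulating its two equations: an AR(1) state equation for log-volatility and a measurement equation with MA(1) errors whose innovation variance is time-varying. All parameter values below are assumed for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 500
mu_h, phi, sigma_h = -1.0, 0.95, 0.2   # log-volatility AR(1) parameters (assumed)
psi = 0.4                              # MA(1) coefficient (assumed)

# State equation: log-volatility h_t follows a stationary AR(1)
h = np.empty(T)
h[0] = mu_h
for t in range(1, T):
    h[t] = mu_h + phi * (h[t - 1] - mu_h) + sigma_h * rng.normal()

# Measurement equation: MA(1) errors with innovation variance exp(h_t),
# so the measurement errors are serially dependent
u = np.exp(h / 2) * rng.normal(size=T)
y = u + psi * np.concatenate([[0.0], u[:-1]])  # y_t = u_t + psi * u_{t-1}
```

The serial dependence induced by psi is exactly what breaks the independence assumption that standard stochastic-volatility samplers rely on.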
5.
We examine the econometric implications of the decision problem faced by a profit/utility-maximizing lender operating in a simple “double-binary” environment, where the two actions available are “approve” or “reject”, and the two states of the world are “pay back” or “default”. In practice, such decisions are often made by applying a fixed cutoff to the maximum likelihood estimate of a parametric model of the default probability. Following Elliott and Lieli (2007), we argue that this practice might contradict the lender’s economic objective and, using German loan data, we illustrate the use of “context-specific” cutoffs and an estimation method derived directly from the lender’s problem. We also provide a brief discussion of how to incorporate legal constraints, such as the prohibition of disparate treatment of potential borrowers, into the lender’s problem.
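The "context-specific" cutoff follows directly from expected profit: with gain b on a repaid loan and loss l on a default, approving is optimal iff the estimated default probability satisfies p < b/(b + l). A minimal sketch (function names and the payoff numbers are illustrative):

```python
def optimal_cutoff(gain: float, loss: float) -> float:
    """Default-probability cutoff maximizing expected profit:
    approve iff (1 - p) * gain - p * loss > 0, i.e. p < gain / (gain + loss)."""
    return gain / (gain + loss)

def decide(p_default: float, gain: float, loss: float) -> str:
    return "approve" if p_default < optimal_cutoff(gain, loss) else "reject"

# A lender earning 10 on repayment but losing 50 on default uses a 1/6 cutoff,
# far below a naive fixed 0.5 rule
print(optimal_cutoff(10, 50))   # 0.1666...
print(decide(0.10, 10, 50))     # approve
print(decide(0.25, 10, 50))     # reject
```

This makes concrete why a fixed cutoff can contradict the lender's objective: the economically correct threshold moves with the payoff structure of the loan.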
6.
The minimum discrimination information principle is used to identify an appropriate parametric family of probability distributions and the corresponding maximum likelihood estimators for binary response models. Estimators in the family subsume the conventional logit model and form the basis for a set of parametric estimation alternatives with the usual asymptotic properties. Sampling experiments are used to assess finite sample performance.
7.
This paper presents a convenient shortcut method for implementing the Heckman estimator of the dynamic random effects probit model and other dynamic nonlinear panel data models using standard software. It then compares the estimators proposed by Heckman, Orme and Wooldridge, based on three alternative approximations, first in an empirical model for the probability of unemployment and then in a set of simulation experiments. The results indicate that none of the three estimators dominates the other two in all cases. In most cases, all three estimators display satisfactory performance, except when the number of time periods is very small.
8.
This paper examines the usefulness of a more refined business cycle classification for monthly industrial production (IP), beyond the usual distinction between expansions and contractions. Univariate Markov-switching models show that a three regime model is more appropriate than a model with only two regimes. Interestingly, the third regime captures ‘severe recessions’, contrasting the conventional view that the additional third regime represents a ‘recovery’ phase. This is confirmed by means of Markov-switching vector autoregressive models that allow for phase shifts between the cyclical regimes of IP and the Conference Board's Leading Economic Index (LEI). The timing of the severe recession regime mostly corresponds with periods of substantial financial market distress and severe credit squeezes, providing empirical evidence for the ‘financial accelerator’ theory.
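A three-regime Markov-switching data-generating process of the kind the abstract describes can be simulated in a few lines; the regime means and transition matrix below are assumed for illustration, not estimated values from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
# Regimes: 0 = expansion, 1 = mild recession, 2 = severe recession
means = np.array([0.3, -0.2, -1.0])  # assumed mean IP growth per regime (% per month)
sigma = 0.5
# Assumed transition matrix: severe recessions are rare and short-lived
P = np.array([[0.97, 0.02, 0.01],
              [0.10, 0.85, 0.05],
              [0.05, 0.25, 0.70]])

T = 600
states = np.empty(T, dtype=int)
states[0] = 0
for t in range(1, T):
    states[t] = rng.choice(3, p=P[states[t - 1]])  # Markov regime transitions
ip_growth = means[states] + sigma * rng.normal(size=T)
```

Estimation then reverses this: given only ip_growth, a Markov-switching model infers the hidden regime sequence and the regime-specific means.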
9.
A regression discontinuity (RD) research design is appropriate for program evaluation problems in which treatment status (or the probability of treatment) depends on whether an observed covariate exceeds a fixed threshold. In many applications the treatment-determining covariate is discrete. This makes it impossible to compare outcomes for observations “just above” and “just below” the treatment threshold, and requires the researcher to choose a functional form for the relationship between the treatment variable and the outcomes of interest. We propose a simple econometric procedure to account for uncertainty in the choice of functional form for RD designs with discrete support. In particular, we model deviations of the true regression function from a given approximating function—the specification errors—as random. Conventional standard errors ignore the group structure induced by specification errors and tend to overstate the precision of the estimated program impacts. The proposed inference procedure that allows for specification error also has a natural interpretation within a Bayesian framework.
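A common practical response to the group structure the abstract describes is to cluster standard errors at the support points of the discrete running variable. The sketch below simulates a discrete-support RD, fits a linear specification with a treatment dummy, and computes cluster-robust (no small-sample correction) standard errors by hand; the data, functional form and threshold are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)
n, cutoff, true_effect = 2000, 5, 1.5
x = rng.integers(0, 11, n)              # discrete running variable (e.g. age in years)
treat = (x >= cutoff).astype(float)
y = 0.2 * x + true_effect * treat + rng.normal(0, 1, n)

# Parametric RD: outcome on a linear trend in x plus the treatment dummy
Z = np.column_stack([np.ones(n), x, treat])
beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
resid = y - Z @ beta

# Cluster-robust variance with one cluster per support point of x
bread = np.linalg.inv(Z.T @ Z)
meat = np.zeros((3, 3))
for v in np.unique(x):
    g = x == v
    s = Z[g].T @ resid[g]               # within-cluster score sum
    meat += np.outer(s, s)
V = bread @ meat @ bread
effect, se = beta[2], np.sqrt(V[2, 2])  # treatment effect and clustered SE
```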
10.
This paper compares price stickiness on the Internet and in traditional brick-and-mortar stores and examines differences across five countries: France, Germany, Italy, the UK and the US. First, in contrast to conventional retail prices, we find that Internet prices change less often in the US than in the EU countries, although this does not hold for all product categories. Second, prices on the Internet are not necessarily more flexible than prices in brick-and-mortar stores. Third, our dataset reveals substantial heterogeneity in the frequency of price changes across Internet shops. Finally, panel logit estimates suggest that the likelihood of observing a price change is a function of both state-dependent and time-dependent factors.
11.
As lifestyles continue to change and more and more smart household products enter daily life, many urban residents lack the time and energy, because of work and other commitments, to manage hanging out and bringing in laundry. This design describes in detail the design steps and functions of a clothes-drying rack control system: a smart rack that retracts the laundry automatically according to outdoor conditions, with no manual intervention. A photoresistor senses ambient light so that the rack retracts the laundry automatically at nightfall, and a DHT11 sensor measures ambient humidity so that the rack retracts automatically when it rains; both signals are fed to the AT89C51 microcontroller at the core of the control system. The design has two modes. In automatic mode, the system compares the sampled readings against setpoints and drives a stepper motor forward or in reverse to retract or extend the rack; in manual mode, the user extends and retracts the rack directly.
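The automatic/manual decision logic can be sketched hardware-free; the thresholds and function names below are assumed for illustration (on the actual AT89C51 these readings would come from the photoresistor and DHT11 driver code, in C51).

```python
LIGHT_DARK_THRESHOLD = 20      # below this light level, treat as nightfall (assumed)
HUMIDITY_RAIN_THRESHOLD = 85   # above this humidity (%), treat as rain (assumed)

def rack_action(mode: str, light: int, humidity: int, manual_cmd: str = "hold") -> str:
    """One control-loop iteration: return 'retract', 'extend' or 'hold'."""
    if mode == "manual":
        return manual_cmd           # user drives the stepper motor directly
    # automatic mode: retract at nightfall or when rain is detected
    if light < LIGHT_DARK_THRESHOLD or humidity > HUMIDITY_RAIN_THRESHOLD:
        return "retract"
    return "extend"

print(rack_action("auto", light=80, humidity=40))   # extend
print(rack_action("auto", light=80, humidity=92))   # retract
print(rack_action("manual", 80, 40, "retract"))     # retract
```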
12.
《Socio》2021
The paper provides a conceptual framework for a multi-dimensional assessment of risk associated with natural disasters. The different components of risk (hazard, exposure, vulnerability and resilience) are seen in a combined natural and socio-economic perspective and are integrated into a Disaster Risk Assessment Tool (DRAT). The tool can be used to support disaster management strategies, as well as risk mitigation and adaptation strategies at very disaggregated geographical or administrative scales. In this paper, we illustrate the features of the DRAT and we apply the tool to 7556 Italian municipalities to map their multidimensional risk. DRAT can be particularly useful to identify hotspots that are characterized by high hazard, exposure and vulnerability and by low resilience. In order to identify hotspots, we perform a cluster analysis of the Italian municipalities in terms of their risk ranking based on DRAT. We also suggest how the tool may be exploited within the processes of disaster risk policy.
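One common way to combine the four components is multiplicative, with risk rising in hazard, exposure and vulnerability and falling in resilience. The toy sketch below uses that assumed aggregation on hypothetical municipality scores; DRAT's actual aggregation may differ.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100  # hypothetical municipalities
hazard, exposure, vulnerability = rng.random((3, n))   # scores in [0, 1]
resilience = 0.1 + 0.9 * rng.random(n)                 # bounded away from zero

# Assumed composite index: high hazard/exposure/vulnerability and low
# resilience push a municipality toward hotspot status
risk = hazard * exposure * vulnerability / resilience
ranking = np.argsort(-risk)   # most at-risk municipalities first
hotspots = ranking[:10]
```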
13.
The concept of parameter identification (for a given specification) is differentiated from global identification (which specification is right). First-order conditions for production under risk are shown to admit many alternative specification pairs representing risk preferences and either perceived price risk, production risk, or the deterministic production structure. Imposing an arbitrary specification on any of the latter three determines which risk preference specification fits a given dataset, undermining global identification even when parameter identification is suggested by typical statistics. This lack of identification is not relaxed by increasing the number of observations. Critical implications for estimation of mean-variance specifications are derived.
14.
This paper investigates the spurious effect in forecasting asset returns when signals from technical trading rules are used as predictors. Against economic intuition, the simulation result shows that, even if past information has no predictive power, buy or sell signals based on the difference between the short-period and long-period moving averages of past asset prices can be statistically significant when the forecast horizon is relatively long. The theoretical analysis reveals that both ‘momentum’ and ‘contrarian’ strategies can be falsely supported, while the probability of obtaining each result depends on the type of test statistic employed.
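The crossover rule behind such signals is simple to state: buy when the short-window moving average is above the long-window one. A sketch on a simulated random walk (so past prices carry no true predictive power, mirroring the paper's null); the window lengths are assumed.

```python
import numpy as np

rng = np.random.default_rng(6)
# Geometric random walk: by construction, no genuine predictability
prices = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, 500)))

def moving_average(p, w):
    return np.convolve(p, np.ones(w) / w, mode="valid")

short_w, long_w = 5, 20  # assumed window lengths
ma_s = moving_average(prices, short_w)[long_w - short_w:]  # align end dates
ma_l = moving_average(prices, long_w)
signal = np.where(ma_s > ma_l, 1, -1)  # +1 = buy ('momentum'), -1 = sell
```

Regressing long-horizon returns on such signals and finding "significant" coefficients on data like this is exactly the spurious effect the paper analyzes.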
15.
In this paper, we introduce a new flexible mixed model for multinomial discrete choice where the key individual- and alternative-specific parameters of interest are allowed to follow an assumption-free nonparametric density specification, while other alternative-specific coefficients are assumed to be drawn from a multivariate Normal distribution, which eliminates the independence of irrelevant alternatives assumption at the individual level. A hierarchical specification of our model allows us to break down a complex data structure into a set of submodels with the desired features that are naturally assembled in the original system. We estimate the model, using a Bayesian Markov Chain Monte Carlo technique with a multivariate Dirichlet Process (DP) prior on the coefficients with nonparametrically estimated density. We employ a “latent class” sampling algorithm, which is applicable to a general class of models, including non-conjugate DP base priors. The model is applied to supermarket choices of a panel of Houston households whose shopping behavior was observed over a 24-month period in years 2004–2005. We estimate the nonparametric density of two key variables of interest: the price of a basket of goods based on scanner data, and driving distance to the supermarket based on their respective locations. Our semi-parametric approach allows us to identify a complex multi-modal preference distribution, which distinguishes between inframarginal consumers and consumers who strongly value either lower prices or shopping convenience.
16.
In this paper, we introduce a new Poisson mixture model for count panel data where the underlying Poisson process intensity is determined endogenously by consumer latent utility maximization over a set of choice alternatives. This formulation accommodates the choice and count in a single random utility framework with desirable theoretical properties. Individual heterogeneity is introduced through a random coefficient scheme with a flexible semiparametric distribution. We deal with the analytical intractability of the resulting mixture by recasting the model as an embedding of infinite sequences of scaled moments of the mixing distribution, and newly derive their cumulant representations along with bounds on their rate of numerical convergence. We further develop an efficient recursive algorithm for fast evaluation of the model likelihood within a Bayesian Gibbs sampling scheme. We apply our model to a recent household panel of supermarket visit counts. We estimate the nonparametric density of three key variables of interest (price, driving distance, and their interaction) while controlling for a range of consumer demographic characteristics. We use this econometric framework to assess the opportunity cost of time and analyze the interaction between store choice, trip frequency, search intensity, and household and store characteristics. We also conduct a counterfactual welfare experiment and compute the compensating variation for a 10%-30% increase in Walmart prices.
17.
Traditional embedded-system software design is widely based on a single-task, sequential-execution model, so design changes and feature extensions to an application typically require extensive modification, forcing frequent system rework that can prevent the design goals from being met. The MCS51 family is currently one of the most widely used microcontroller families in China; introducing uCOS-II into relatively complex C51 applications can considerably improve system reliability and stability.
18.
19.
Unpredictability arises from intrinsic stochastic variation, unexpected instances of outliers, and unanticipated extrinsic shifts of distributions. We analyze their properties, relationships, and different effects on the three arenas in the title, which suggests considering three associated information sets. The implications of unanticipated shifts for forecasting, economic analyses of efficient markets, conditional expectations, and inter-temporal derivations are described. The potential success of general-to-specific model selection in tackling location shifts by impulse-indicator saturation is contrasted with the major difficulties confronting forecasting.
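Impulse-indicator saturation can be sketched in its simplest split-half form: add an impulse dummy for every observation in one half of the sample, estimate on the full sample, retain the dummies with large t-statistics, and repeat for the other half. The data, retention threshold, and function below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100
x = rng.normal(size=n)
y = 1.0 + 0.5 * x + rng.normal(0, 1, n)
y[30] += 6.0  # plant an outlier/location shift the procedure should detect
X = np.column_stack([np.ones(n), x])

def significant_impulses(y, X, block, crit=2.5):
    """Regress y on X plus an impulse dummy for every observation in `block`
    (using the full sample); return block members whose dummy has |t| > crit."""
    n, k = X.shape
    D = np.zeros((n, len(block)))
    D[block, np.arange(len(block))] = 1.0
    Z = np.column_stack([X, D])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    resid = y - Z @ beta
    sigma2 = resid @ resid / (n - Z.shape[1])
    se = np.sqrt(sigma2 * np.diag(np.linalg.inv(Z.T @ Z)))
    t = beta / se
    return [block[j] for j in range(len(block)) if abs(t[k + j]) > crit]

half1, half2 = np.arange(n // 2), np.arange(n // 2, n)
retained = significant_impulses(y, X, half1) + significant_impulses(y, X, half2)
# retained should include observation 30 (plus, possibly, a few chance detections)
```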
20.
Factors estimated from large macroeconomic panels are being used in an increasing number of applications. However, little is known about how the size and the composition of the data affect the factor estimates. In this paper, we ask whether using more series to extract the factors can make the resulting factors less useful for forecasting; the answer is yes. Such a problem tends to arise when the idiosyncratic errors are cross-correlated. It can also arise if forecasting power is provided by a factor that is dominant in a small dataset but dominated in a larger dataset. In a real-time forecasting exercise, we find that factors extracted from as few as 40 pre-screened series often yield satisfactory or even better results than using all 147 series. Weighting the data by their properties when constructing the factors also leads to improved forecasts. Our simulation analysis is unique in that special attention is paid to cross-correlated idiosyncratic errors, and we also allow the factors to have stronger loadings on some groups of series than on others. It thus allows us to better understand the properties of the principal components estimator in empirical applications.
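The principal-components estimator at the center of this literature amounts to an SVD of the standardized panel; a minimal sketch on simulated data with an exact factor structure (panel dimensions and the number of factors are assumed):

```python
import numpy as np

rng = np.random.default_rng(8)
T, N, r = 120, 40, 2  # periods, series, number of factors (assumed)
F_true = rng.normal(size=(T, r))
L_true = rng.normal(size=(N, r))
panel = F_true @ L_true.T + rng.normal(0, 1, (T, N))  # factors + idiosyncratic noise

# Standardize each series, then extract factors as rescaled left singular vectors
Xs = (panel - panel.mean(0)) / panel.std(0)
U, s, Vt = np.linalg.svd(Xs, full_matrices=False)
factors = np.sqrt(T) * U[:, :r]   # principal-components factor estimates
loadings = (Xs.T @ factors) / T   # implied loadings
```

The paper's point can be read off this setup: if the noise term is cross-correlated, or if extra series load mainly on an irrelevant dominant factor, enlarging N can degrade rather than improve the estimated factors for forecasting.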