Similar Documents
20 similar documents found (search time: 31 ms)
1.
In emergency response volunteer programs, volunteers in the vicinity of an emergency are alerted via their mobile phones to the scene of the event to perform a specific task. Tasks are usually assigned based on predetermined rules that disregard real-world uncertainties. In this paper, we consider some of these uncertainties and propose an optimization model for the dispatch of volunteers to emergencies, where all task assignments must be made before dispatch. This means that each volunteer must be given a task before it is known whether (s)he is available. The model becomes computationally demanding for large problem instances; therefore, we develop a simple greedy heuristic for the problem and verify that it can produce high-quality solutions by comparing it to the exact model. While the model is for a general emergency, we test it on the case of volunteers responding to out-of-hospital cardiac arrest (OHCA) incidents. We compare the results of the model to the dispatch strategies used in two ongoing volunteer programs in Sweden and the Netherlands and use simulation to validate the results. The results show that the model most often outperforms the currently used strategies; however, the computational run times, even for the heuristic, are too high to be operationally useful for large problem instances. Thus, it should be possible to improve outcomes using optimization-based task-assignment strategies, but a fast solution method is needed for such strategies to be practically usable.
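The pre-dispatch assignment idea can be sketched as a simple greedy rule, assuming an availability-probability-weighted benefit for each volunteer–task pair. The data layout and function below are hypothetical illustrations, not the paper's model or heuristic:

```python
import itertools

def greedy_dispatch(volunteers, tasks, value):
    """Greedy pre-dispatch assignment sketch: every volunteer must get a task
    before we learn who is actually available, so volunteer-task pairs are
    ranked by availability-weighted benefit.

    volunteers: name -> probability the volunteer turns out to be available
    tasks: task ids; each task is filled at most once in this simplification
    value: (volunteer, task) -> benefit if the volunteer is available
    """
    assignment, used_tasks = {}, set()
    pairs = sorted(
        ((volunteers[v] * value[(v, t)], v, t)
         for v, t in itertools.product(volunteers, tasks)),
        reverse=True)
    for _score, v, t in pairs:
        if v not in assignment and t not in used_tasks:
            assignment[v] = t
            used_tasks.add(t)
    return assignment
```

The capacity of one volunteer per task is a simplification; in OHCA programs several volunteers may be sent to the same task (e.g., CPR versus fetching an AED).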

2.
The problem of scheduling n independent jobs on a single processor to minimize the total tardiness of the assignment has attracted much attention. Solution algorithms, both exact and approximate, have been reported, but no polynomial-time exact algorithm has yet been found, nor has the problem been proven NP-complete. In this paper we consider the more general case of scheduling n independent jobs on m unequal processors to minimize total tardiness. Since this problem is more complex than the corresponding single-processor problem, no polynomial-time algorithm is in sight. For problems of this nature, approximate algorithms may be more valuable than exact algorithms in terms of applications. A heuristic algorithm is developed to solve the multiple-processor problem. Computational experiments show that the heuristic algorithm is efficient, fast, and easy to implement.
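A minimal list-scheduling sketch of the unequal-processor tardiness problem, assuming an earliest-due-date (EDD) ordering and speed-scaled processors; this is a generic heuristic of the same flavor, not the paper's algorithm:

```python
def edd_schedule(jobs, speeds):
    """List-scheduling sketch for total tardiness on unequal processors.

    jobs: list of (processing_requirement, due_date)
    speeds: per-processor speed factors (the 'unequal' part)
    Jobs are taken in earliest-due-date order; each is placed on the
    processor where it would finish soonest.  Returns total tardiness.
    """
    finish = [0.0] * len(speeds)        # current completion time per processor
    total_tardiness = 0.0
    for work, due in sorted(jobs, key=lambda j: j[1]):   # EDD order
        candidates = [finish[m] + work / speeds[m] for m in range(len(speeds))]
        m = min(range(len(speeds)), key=lambda i: candidates[i])
        finish[m] = candidates[m]
        total_tardiness += max(0.0, finish[m] - due)
    return total_tardiness
```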

3.
This paper models for the first time a spatial process in local tax policies in the presence of centrally imposed fiscal limitations. Focusing on the frequently encountered case of a tax rate cap, we evaluate three empirical approaches to the analysis of spatially dependent limited tax policies: (i) a Bayesian spatial approach for censored dependent variables; (ii) a Tobit corner solution model augmented with a spatial lag; (iii) a spatial discrete hazard model. The evidence arising from an investigation of severely state‐constrained local vehicle taxes in Italy suggests that ignoring tax limitations can lead to substantial underestimation of inter‐jurisdictional fiscal interaction. Copyright © 2011 John Wiley & Sons, Ltd.

4.
We show that exact computation of a family of ‘max weighted score’ estimators, including Manski’s max score estimator, can be achieved efficiently by reformulating them as mixed integer programs (MIP) with disjunctive constraints. The advantage of our MIP formulation is that estimates are exact and can be computed using widely available solvers in reasonable time. In a classic work-trip mode choice application, our method delivers exact estimates that lead to a different economic interpretation of the data than previous heuristic estimates. In a small Monte Carlo study we find that our approach is computationally efficient for usual estimation problem sizes.
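For intuition, the max score objective can be evaluated by brute force over a candidate grid, assuming a binary-choice setup with the first coefficient normalized to ±1 as is standard for max score. The paper's contribution is precisely replacing this kind of search with an exact MIP; the code below is only an illustration of the objective:

```python
def max_score(X, y, grid):
    """Brute-force Manski max score over a candidate coefficient grid.

    X: list of covariate tuples; y: 0/1 outcomes; grid: candidate
    coefficient vectors.  Each observation contributes +1 when the sign
    of x'b agrees with the outcome and -1 otherwise.
    """
    def score(b):
        return sum(
            (2 * yi - 1) *
            (1 if sum(bj * xj for bj, xj in zip(b, xi)) >= 0 else -1)
            for xi, yi in zip(X, y))
    return max(grid, key=score)
```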

5.
Combining ethnographic inquiries with questionnaires, this article addresses the dearth of systematic research on core employees in Turkey's shipyards. In doing so, it revises the conventional association of precarity, whether exclusive or predominant, with peripheral jobs. In particular, we point to the rise of a peculiar model, ‘paradoxical precarity’, as core jobs have become more identifiable with precarity than the rest. Paradoxical precarity has four distinguishable contours: (i) masses of core employees lost their jobs to precarious workers; (ii) even so, a substantial proportion of employees remain at the core; (iii) this, however, came at a cost: they became more dissatisfied than others with remuneration, job security, employee involvement and job intensity, whilst frustrated with unions; and (iv) paradoxical precarity has faced political and economic challenges, but it is reproduced by managerial short‐termism under competitive pressures to save on high skills, thanks to an ever‐increasing number of graduates.

6.
The paper compares the pseudo real‐time forecasting performance of three dynamic factor models: (i) the standard principal component model introduced by Stock and Watson in 2002; (ii) the model based on generalized principal components, introduced by Forni, Hallin, Lippi, and Reichlin in 2005; (iii) the model recently proposed by Forni, Hallin, Lippi, and Zaffaroni in 2015. We employ a large monthly dataset of macroeconomic and financial time series for the US economy, which includes the Great Moderation, the Great Recession and the subsequent recovery (an update of the so‐called Stock and Watson dataset). Using a rolling window for estimation and prediction, we find that model (iii) significantly outperforms models (i) and (ii) in the Great Moderation period for both industrial production and inflation, and that model (iii) is also the best method for inflation over the full sample. However, model (iii) is outperformed by models (ii) and (i) over the full sample for industrial production.

7.
All macro-economic models contain nonlinearity in variables within their simultaneous equations systems. I propose a full-information estimation method for such models. The method is (i) asymptotically efficient, (ii) computationally feasible with contemporary computer technology, as it consists of calculations very much like those of nonlinear multipliers, and (iii) potentially applicable to the undersized-sample case that prevails in macro-economic model building. Two other methods are also investigated: one is found to be asymptotically inefficient, and the other turns out to be inapplicable to the undersized-sample case.

8.
We present the Hierarchical Composition (HICOM) heuristic procedure for single-machine scheduling with sequence-dependent setups that minimizes the total setup time. The heuristic is a two-stage procedure that takes advantage of natural product groupings and can be used in a group technology environment. Computational results show that HICOM requires negligible solution time for all cases tested across various sizes. More importantly, when benchmarked against the general-purpose solver CPLEX, HICOM shows advantages in both CPU time and solution quality for large problems. Thus, HICOM is highly valuable in practice when quick, good solutions are preferred for dynamic scheduling in a just-in-time lean manufacturing environment. Furthermore, when commercial software is not available, as is often the case for small to medium manufacturers, HICOM becomes a viable option because it is easy to understand and implement.
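A two-stage, group-then-chain heuristic in the spirit of HICOM might look as follows, assuming a nearest-neighbour rule on setup times at both stages. This is a sketch only and does not reproduce the published procedure:

```python
def two_stage_sequence(groups, setup):
    """Two-stage sketch: first order jobs within each product group by a
    nearest-neighbour rule on setup times, then chain the groups by the
    setup from one group's last job to the next group's first job.

    groups: group name -> list of jobs; setup: (job_a, job_b) -> setup time.
    The first item of each list seeds its chain (an arbitrary choice).
    """
    def nearest_chain(items, dist):
        seq, rest = [items[0]], set(items[1:])
        while rest:
            nxt = min(rest, key=lambda j: dist(seq[-1], j))
            seq.append(nxt)
            rest.remove(nxt)
        return seq

    # Stage 1: intra-group job sequences
    inner = {g: nearest_chain(jobs, lambda a, b: setup[(a, b)])
             for g, jobs in groups.items()}
    # Stage 2: order the groups themselves
    order = nearest_chain(list(groups),
                          lambda g, h: setup[(inner[g][-1], inner[h][0])])
    return [j for g in order for j in inner[g]]
```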

9.
Cloud manufacturing (CMfg) has emerged as a new manufacturing paradigm that provides ubiquitous, on-demand manufacturing services to customers through networks and CMfg platforms. In a CMfg system, task scheduling, as an important means of finding suitable services for specific manufacturing tasks, plays a key role in enhancing system performance. Customers’ requirements in CMfg are highly individualized, which leads to diverse manufacturing tasks in terms of execution flows and users’ preferences. We focus on diverse manufacturing tasks and aim to address their scheduling in CMfg. First, a mathematical model of task scheduling is built based on an analysis of the scheduling process in CMfg. To solve this scheduling problem, we propose a scheduling method for diverse tasks, which enables each service demander to obtain the desired manufacturing services. Candidate service sets are generated according to subtask directed graphs, and an improved genetic algorithm is applied to search for optimal task scheduling solutions. The effectiveness of the proposed scheduling method is verified by a case study with individualized customers’ requirements. The results indicate that the proposed task scheduling method achieves better performance than common algorithms such as simulated annealing and pattern search.
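A skeletal genetic algorithm for the service-selection core of such scheduling, assuming a per-subtask candidate-service cost table; plain one-point crossover and point mutation stand in for the paper's improved operators, so this is a generic sketch rather than the authors' method:

```python
import random

def ga_schedule(costs, pop=30, gens=60, seed=0):
    """Toy GA for service selection: costs[i][j] is the cost of running
    subtask i on candidate service j; a chromosome picks one service per
    subtask.  Elitism keeps the best half each generation; children are
    one-point crossovers of two elites with one randomly mutated gene.
    """
    rng = random.Random(seed)
    n = len(costs)

    def fitness(c):                      # lower total cost = better
        return sum(costs[i][c[i]] for i in range(n))

    popu = [[rng.randrange(len(costs[i])) for i in range(n)]
            for _ in range(pop)]
    for _ in range(gens):
        popu.sort(key=fitness)
        elite = popu[:pop // 2]
        children = []
        while len(elite) + len(children) < pop:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n) if n > 1 else 0
            child = a[:cut] + b[cut:]
            i = rng.randrange(n)         # point mutation
            child[i] = rng.randrange(len(costs[i]))
            children.append(child)
        popu = elite + children
    return min(popu, key=fitness)
```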

10.
This paper provides results on the economic decision‐making process of Spanish workers, who choose their jobs in response to variations in non‐wage income, the wage, and the prices of non‐pecuniary job characteristics. To that end, we formulate a non‐separable generalization of the Linear Expenditure System (NLES) as a joint model of labor supply and job characteristics demand, estimated separately for males and females using a 1991 Spanish survey. The main results show that: (i) some job characteristics have a positive effect on the wage, whereas others have a negative effect; (ii) the average percentage effects of employer size and the complexity index are higher for males than for females, with the fatal accident risk displaying similar values; (iii) if the non‐wage income of every worker increases, these individuals will prefer to devote fewer hours to work, and will also prefer jobs in smaller companies and with a lower risk; and (iv) if the wage and hedonic prices of non‐pecuniary job characteristics increase, then both males and females will prefer to reduce their labor supply, and devote their available time to jobs in bigger firms, with higher risk and complexity. Copyright © 1999 John Wiley & Sons, Ltd.

11.
Establishment of aggregation hubs in a supply chain network (SCN) is typically a facility location-allocation (FLA) decision, which is known to be an NP-hard optimization problem. Considering the flow of heterogeneous perishable products, like fresh produce, with different spoilage rates further increases the complexity of the problem. This is due to the effect of transportation time and conditions, services provided in the hub, and hub proximity to supply sources on the quality and quantity of products eventually reaching the demand destinations, and hence on the location-allocation decision. In this paper, this problem is formulated as a mixed integer linear programming (MILP) model that, for the first time, considers a number of these problem characteristics simultaneously, to minimize the transportation, spoilage, processing, and capacity-based hub establishment costs. Due to its complexity, two hybrid algorithms that combine a meta-heuristic with a perishability-modified transportation algorithm are proposed to solve the problem. The algorithms are based on binary particle swarm optimization (BPSO) and simulated annealing (SA). Taguchi analysis is used to tune the significant parameters of both algorithms for different problem sizes. Computational analysis is further conducted to evaluate and compare the performance of the algorithms using randomly generated test instances and exact solutions obtained using CPLEX. Results show that while both algorithms are capable of obtaining optimal solutions for most instances, the hybrid BPSO slightly outperforms the hybrid SA in terms of consistency and solution time.

12.
This paper provides a multi-stage multi-layer mapping methodology for capturing the macro-level supply chain dynamics that govern industrial systems using renewable feedstocks. The mapping approach combines Industrial Systems Mapping and System Dynamics principles to systematically capture the interrelations across: (i) institutional players, (ii) sector specialists, (iii) products and intermediates, (iv) production operations, and (v) firms within the supply chain. The interfaces are further explored in four interconnected and mutually interacting theme areas of analysis, namely: (i) renewable chemical feedstocks, (ii) production technologies, (iii) target markets, and (iv) value and economic viability. We demonstrate the applicability of our approach by mapping the dynamics in industrial systems for the production of ‘green’ pharmaceuticals, particularly via the illustrative case of paracetamol. Through the use of the proposed integrated mapping process, the case study demonstrates the principal interrelationships and inter-firm dynamics between the different layers of analysis. Three main drivers are identified that could enhance supply network transformations for improved viability of these developing industrial systems, namely: (i) regulatory conformance with market requirements, (ii) system-level feasibility assessment of given renewable feedstocks, and (iii) target market volume demand. The causal feedback elements of the provided mapping technique indicate that it could support the analysis of the transformation dynamics of industrial systems enabled by renewable feedstocks. The standardisation of the methodology and its elements provides an effective visualisation technique with cross-industry relevance.

13.
The range of daily asset prices is often used as a measure of volatility. Using a CARRX (conditional autoregressive range with exogenous variables) model and the parsimony principle, the paper investigates the factors affecting the volatilities of Asian equity markets. Since the beginning of the new century, emerging Asian markets such as Taiwan and Shanghai have been undergoing various stages of financial globalization, so the volatility of an equity market may not be explained solely by its own dynamics. In this paper, we examine volatility using the following factors: (i) lagged returns; (ii) lagged absolute returns; (iii) own trading volume; (iv) U.S. factors; (v) European factors; and (vi) regional (Asian) factors. Factors (i) and (iii) are by and large significant, while (ii) is not. Controlling for (i), (ii) and (iii), we find evidence that the volatility of European markets spills over onto both the Taiwan and Tokyo markets, and mild evidence that the volatility of the U.S. market spills over onto the Hong Kong market, but there are no spillovers from the European or U.S. markets onto the Shanghai market.

14.
This is an essay on a unified approach to the identifiability problem in static models with and without hidden endogenous variables. As is well known, when some of these variables are unobserved, the prior information required for models in which all endogenous variables are observed is still needed. In addition, extra prior information that takes the place of the means and covariances of the missing variables must be supplied, directly or indirectly, by the statistical researcher. In the paper we characterize the quality and quantity of the required information for the general linear static model and apply it when the model is (i) an econometric demand and supply model with missing observations on the quantity transacted, (ii) a factor analysis model with observed characteristics of the test takers, and (iii) a LISREL model without fixed exogenous variables. With unknown true parameters, the exact rank conditions are seldom verifiable, but we recommend an implementable checklist that is adequate for almost all parameters.

15.
We propose and develop a scheduling system for a very special type of flow shop. This flow shop processes a variety of jobs that are identical from a processing point of view: all jobs have the same routing over the facilities of the shop and require the same amount of processing time at each facility. Individual jobs, though, may differ, since they may have different tasks performed upon them at a particular facility. Examples of such shops are flexible machining systems and integrated circuit fabrication processes. In a flexible machining system, all jobs may have the same routing over the facilities, but the actual tasks performed may differ; for instance, a drilling operation may vary in the placement or size of the holes. Similarly, in integrated circuit manufacturing, although all jobs may follow the same routing, the jobs will be differentiated at the photolithographic operations. The photolithographic process establishes patterns upon the silicon wafers, where the patterns differ according to the mask that is used.

The flow shop that we consider has another important feature, namely that the job routing is such that a job may return one or more times to any facility. We say that when a job returns to a facility it reenters the flow at that facility, and consequently we call the shop a re-entrant flow shop. In integrated circuit manufacturing, a particular integrated circuit will return several times to the photolithographic process in order to place several layers of patterns on the wafer. Similarly, in a flexible machining system, a job may have to return to a particular station several times for additional metal-cutting operations.

These re-entrant flow shops are usually operated and scheduled as general job shops, ignoring the inherent structure of the shop flow. Viewing such shops as job shops means using myopic scheduling rules to sequence jobs at each facility and usually requires large queues of work-in-process inventory to maintain high facility utilization, at the expense of long throughput times.

In this paper we develop a cyclic scheduling method that takes advantage of the flow character of the process. The cycle period is the inverse of the desired production rate (jobs per day). The cyclic schedule is predicated upon the requirement that during each cycle the shop should perform all of the tasks required to complete a job, although possibly on different jobs. In other words, during a cycle period we require each facility to do each task assigned to it exactly once. A cyclic schedule is then just the sequencing and timing, on each facility, of all the tasks that that facility must perform during each cycle period; this schedule is repeated by each facility every cycle period. Determining the best cyclic schedule is a very difficult combinatorial optimization problem that we cannot solve optimally for actual operations. Instead, we present a computerized heuristic procedure that seems very effective at producing good schedules. We have found that the throughput time of these schedules is much less than that achievable with the myopic sequencing rules used in a job shop. We are attempting to implement the scheduling system at an integrated circuit fabrication facility.
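The cycle-period requirement can be illustrated with a minimal constructor, assuming tasks are simply laid end to end at each facility in the given order. The paper's heuristic optimizes the sequencing and timing; this sketch only builds one feasible cycle and checks feasibility against the cycle period:

```python
def cyclic_schedule(facility_tasks, cycle):
    """Sketch of a cyclic schedule: each facility performs every task
    assigned to it exactly once per cycle period.

    facility_tasks: facility -> list of (task, duration), in the order
    they will be run; cycle: cycle period (inverse of production rate).
    Returns {(facility, task): start time within the cycle}, or None when
    a facility's total workload exceeds the cycle period (the desired
    production rate is then infeasible at that facility).
    """
    starts = {}
    for fac, tasks in facility_tasks.items():
        if sum(d for _, d in tasks) > cycle:
            return None
        t = 0
        for task, dur in tasks:
            starts[(fac, task)] = t % cycle
            t += dur
    return starts
```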

16.
In the analysis of clustered and longitudinal data that includes a covariate varying both between and within clusters, a Hausman pretest is commonly used to decide whether subsequent inference is made using the linear random intercept model or the fixed effects model. We assess the effect of this pretest on the coverage probability and expected length of a confidence interval for the slope, conditional on the observed values of the covariate. This assessment has the advantages that it (i) relates to the values of this covariate at hand, (ii) is valid irrespective of how this covariate is generated, (iii) uses exact finite-sample results, and (iv) results in an assessment that is determined by the values of this covariate and only two unknown parameters. For two real data sets, our conditional analysis shows that the confidence interval constructed after a Hausman pretest should not be used.
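The pretest being assessed can be sketched for a single slope parameter, assuming the usual chi-square(1) comparison of the fixed-effects and random-intercept estimates. This is the textbook form of the decision rule, not the authors' conditional-coverage machinery:

```python
def hausman_pretest(b_fe, var_fe, b_re, var_re, crit=3.84):
    """One-parameter Hausman pretest sketch.

    b_fe, var_fe: fixed-effects slope estimate and its variance
    b_re, var_re: random-intercept slope estimate and its variance
    Under the null that the random-intercept model is consistent, H is
    chi-square(1); crit=3.84 is the 5% cutoff.  Assumes var_fe > var_re,
    as the efficient estimator has the smaller variance.
    """
    h = (b_fe - b_re) ** 2 / (var_fe - var_re)
    return "fixed-effects" if h > crit else "random-intercept"
```

The paper's point is that confidence intervals built after exactly this kind of selection step can have poor conditional coverage.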

17.
Spatial Growth and Redevelopment with Perfect Foresight and Durable Housing   (Cited by: 2; self-citations: 0; cited by others: 2)
In this paper, I present a theoretical model of the spatial growth of an urban area with durable housing. I combine several assumptions that typically complicate the analysis: (i) housing developers have perfect foresight; (ii) the initial development and many waves of redevelopment are considered in each developer's plan; and (iii) the closed-city assumption is made, so that the time path of population is exogenous and that of consumer utility is endogenous. I still obtain explicit solutions for the spatial pattern of urban growth, and for the timing of the initial residential development and each successive redevelopment at each distance from the urban center. I compare perfect-foresight growth to growth with static expectations, and I examine the comparative statics of both.

18.
We study the Maximal Covering Location Problem with Accessibility Indicators and Mobile Units (MCLP-AIMU), which maximizes facility coverage, the accessibility of the zones to the open facilities, and spatial disaggregation. The main characteristic of our problem is that mobile units can be deployed from open facilities to extend the coverage, accessibility, and opportunities for the inhabitants of the different demand zones. We formulate the MCLP-AIMU as a mixed-integer linear programming model. To solve larger instances, we propose a matheuristic (a combination of exact and heuristic methods) composed of an Estimation of Distribution Algorithm and a parameterized MCLP-AIMU integer model. To test our methodology, we apply the MCLP-AIMU model to cover low-income zones with Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) patients. Using official databases, we construct a set of instances that considers the poverty index, population size, hospital locations, and SARS-CoV-2 patients. The experimental results show the efficiency of our methodology. Compared to the case without mobile units, we drastically improve the coverage and accessibility for the inhabitants of the demand zones.
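The covering flavour of the problem can be illustrated with the standard greedy heuristic for maximal covering, assuming fixed coverage sets per candidate site. The data names are hypothetical, and the paper instead uses an Estimation of Distribution Algorithm combined with an integer model:

```python
def greedy_max_cover(zones, cover, p):
    """Greedy maximal covering sketch.

    zones: zone -> demand weight; cover: candidate site -> set of zones it
    covers; p: number of sites to open.  Repeatedly opens the site with the
    largest marginal covered demand.  Returns (open sites, covered demand).
    """
    open_sites, covered = [], set()
    remaining = dict(cover)
    for _ in range(min(p, len(cover))):
        gain = {s: sum(zones[z] for z in zs - covered)
                for s, zs in remaining.items()}
        s = max(gain, key=gain.get)        # best marginal gain
        open_sites.append(s)
        covered |= remaining.pop(s)
    return open_sites, sum(zones[z] for z in covered)
```

Mobile units could be approximated in this sketch by enlarging a site's coverage set once it is opened; the exact mechanics in the paper are richer.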

19.
This paper compares numerical solutions to the model of Krusell and Smith [1998. Income and wealth heterogeneity in the macroeconomy. Journal of Political Economy 106, 867–896] generated by different algorithms. The algorithms have very similar implications for the correlations between different variables. Larger differences are observed for (i) the unconditional means and standard deviations of individual variables, (ii) the behavior of individual agents during particularly bad times, (iii) the volatility of the per capita capital stock, and (iv) the behavior of the higher-order moments of the cross-sectional distribution. For example, the two algorithms that differ the most from each other generate individual consumption series that have an average (maximum) difference of 1.63% (11.4%).

20.
This paper develops a mathematical framework that relies on modern social network analysis theories to treat the nurse team formation and nurse scheduling (shift assignment) problems, accounting for signed social connections. These problems lie in assigning nurses to teams/shifts such that the constraints regarding both working regulations and nurses' preferences are satisfied. Recent research indicates the dependence of nursing team performance on team social structure; so far, however, social structure considerations have not been explicitly incorporated into mathematical formulations of the nurse scheduling problem. The presented framework introduces models that quantitatively exploit this dependence. The paper explores instances of the Nurse Team Formation Problem (NTFP) and the Nurse Scheduling Problem (NSP) incorporating signed social structure, with measures based on such network structures as edges, full dyads, triplets, k-stars, and balanced and unbalanced triangles in directed, signed networks. The paper presents integer programming formulations for the NTFP and NSP, and a problem-specific heuristic that performs variable-depth neighborhood search to tackle NTFP instances with signed social structures. Computational results for a real-world problem instance with 20 nurses are reported, and the insights obtained from the presented framework and future research directions are discussed.
