Similar Documents
20 similar documents found.
1.
Both practitioners and researchers in the field of Operations Management have suggested that shop scheduling should be an integral component in both the strategic and tactical plans for an organization's assets. This paper examines the use of an accepted measure of return on assets, net present value (NPV), in a simulated shop scheduling environment where early shipment of jobs before their due dates is forbidden. In addition, early shipment of raw materials to the shop is also forbidden. This shop environment is consistent with the prevalent practice in industry of accepting orders only on a just-in-time basis to reduce purchased parts inventories. The NPV measure provides a means of balancing a variety of performance criteria that have previously been treated as separate objectives, including work-in-process inventory, finished goods inventory, mean flow time and mean tardiness, while also providing a means of measuring monetarily the value of various shop scheduling approaches.

The NPV performance of priority scheduling rules and order release policies is measured in this research through the simulation of a random job shop under a variety of environmental conditions. A comparison of priority rules that use time-based information with those that use job value information finds that the Critical Ratio rule provides higher average performance than the three other rules used in the study. However, in some situations that are consistent with JIT practice, value-based priority rules also perform well. The use of a mechanism for delaying the release of jobs to each work center in the shop provided higher average NPV when shop utilization was set at a low level of 80%, while immediate release of work upon its arrival to the shop provided superior performance at a higher shop utilization level of 94%. While JIT materials delivery and costing yielded higher NPV, it did not alter the relative ranking of priority rule/release policy combinations. In addition, it was found that environmental factors, including average job length, average number of tasks per job and level of tardiness penalty, resulted in greater variations in NPV performance than the institution of a JIT raw materials delivery policy.
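The Critical Ratio rule compared here computes, for each queued job, the time remaining until its due date divided by its remaining processing time, and dispatches the job with the smallest ratio. Below is a minimal Python sketch of that dispatching logic; the job data and function names are illustrative, not taken from the study.

    from dataclasses import dataclass

    @dataclass
    class Job:
        name: str
        due_date: float        # promised completion time (shop clock units)
        remaining_work: float  # processing time still required

    def critical_ratio(job: Job, now: float) -> float:
        """Time remaining until the due date divided by remaining work;
        ratios below 1.0 mean the job is already behind schedule."""
        return (job.due_date - now) / job.remaining_work

    def next_job(queue: list[Job], now: float) -> Job:
        # Dispatch the job with the smallest (most urgent) critical ratio.
        return min(queue, key=lambda j: critical_ratio(j, now))

    queue = [Job("A", due_date=40.0, remaining_work=10.0),
             Job("B", due_date=25.0, remaining_work=12.0)]
    print(next_job(queue, now=20.0).name)  # B: ratio 0.42 vs. A's 2.0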

2.
Work center control rules, defined as a combination of job dispatch rules and short-term work center capacity adjustments, are analyzed using queueing theory. Promising rules are evaluated with a job shop simulation model. Simulations comparing work center control rules to the critical ratio rule for job dispatching indicate that work center control can improve performance against customer due dates while simultaneously reducing average work-in-process inventory. The work center control rules are easily implemented by shops currently using input/output control and daily dispatch lists.

3.
One of the management decisions required to operate a dual-constrained job shop is the labor assignment rule. This study examines the effects of various labor assignment rules on the shop's performance. Eleven different labor assignment rules are simulated. A longest-queue rule and the traditional counterparts of the first-in-system, first-served, shortest operation time, job due date, critical ratio and shortest processing time dispatching rules are used to determine to which work center available workers should be transferred. Also tested are five new labor assignment rules that use an average of the priority values of all jobs in queue at a particular work center to determine whether that work center should receive the available worker.

A SIMSCRIPT simulation program that models nine work centers provided the mechanism by which these rules were tested. Five dispatching rules, the counterparts of the five “traditional counterpart” labor assignment rules mentioned earlier, provided different shop environments. Also, the level of staffing of the work centers was altered to provide additional shop environments. Staffing levels of 50% and 67% were employed.

The results show that none of the eleven labor assignment rules had a significant impact on shop performance. This is an important result because it implies that a manager can make the labor assignment decision based on other criteria, such as ease or cost of application of the rules. These results were relatively insensitive to the shop environment, as represented by the dispatching rule and the staffing level.
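As an illustration of the averaged-priority idea, the sketch below sends an available worker to the work center whose queued jobs have the most urgent average priority value. It assumes lower priority values mean more urgent jobs (as with due-date-based values); the data and function name are hypothetical, not the study's rules.

    def assign_worker(queues: dict[str, list[float]]) -> str:
        """Pick the work center whose queued jobs have the lowest
        (most urgent) average priority value; empty queues are skipped."""
        nonempty = (wc for wc, jobs in queues.items() if jobs)
        return min(nonempty, key=lambda wc: sum(queues[wc]) / len(queues[wc]))

    # Priority values of the jobs queued at three work centers:
    queues = {"WC1": [5.0, 9.0], "WC2": [2.0, 8.0, 14.0], "WC3": []}
    print(assign_worker(queues))  # WC1: average 7.0 beats WC2's 8.0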

4.
In many plants, the performance of shop floor workers is measured by accounting-based productivity criteria. Such systems encourage workers to maximize their individual performance, often at the expense of total shop performance. One such company, Union Switch & Signal, a manufacturer of railroad equipment, has decided to increase finished goods inventory in an effort to counteract poor due date performance. Management at Union Switch & Signal feels that workers not following priorities contribute significantly to this poor performance. It has been suggested that the controlled release of jobs into the shop, i.e., Order Review/Release (ORR), may provide the operations manager a vehicle for enforcing job priorities when formal dispatching rules are not strictly followed by workers. In this study, two ORR methodologies are studied with regard to their ability to offset the dysfunctional behavior of workers who seek to maximize their own individual productivity. This type of behavior was captured by simulating the phenomenon of 'cherry picking', which occurs when a job is selected for processing based not on its formal priority but on the difference between its standard allowable processing time and its actual processing time. Results suggest that at least one ORR methodology is able to reduce the difference in resulting labor productivity while improving overall shop performance.
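To make the 'cherry picking' phenomenon concrete, here is a minimal sketch contrasting formal priority dispatching with a worker who picks the job with the largest gap between standard allowable and actual processing time. All names and numbers are illustrative, not the study's simulation model.

    from dataclasses import dataclass

    @dataclass
    class Job:
        name: str
        priority: int        # formal dispatch priority (lower = more urgent)
        standard_time: float # standard allowable processing time
        actual_time: float   # time the job will actually take

    def dispatch_by_priority(queue: list[Job]) -> Job:
        """What the formal dispatching rule prescribes."""
        return min(queue, key=lambda j: j.priority)

    def cherry_pick(queue: list[Job]) -> Job:
        """What a productivity-maximizing worker does: take the job with
        the largest standard-minus-actual time gap, ignoring priority."""
        return max(queue, key=lambda j: j.standard_time - j.actual_time)

    queue = [Job("urgent", priority=1, standard_time=5.0, actual_time=5.5),
             Job("easy",   priority=9, standard_time=8.0, actual_time=4.0)]
    print(dispatch_by_priority(queue).name)  # urgent
    print(cherry_pick(queue).name)           # easy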

5.
This article addresses the question of accuracy of planned lead times (PLTs) that are used with a material requirements planning system. Lead time error is defined as the difference between an item's PLT and the actual lead time (flow time) of an order to replenish the item. Three related topics are discussed: the relationship between system performance and average lead time error, the transient effect on work-in-process (WIP) inventory of increasing PLTs, and the relative accuracy of three methods of determining PLTs. A distinction is made between available and WIP inventory. The former includes any purchased item, fabricated part, assembly, or finished good that is in storage and available for use or delivery. WIP denotes materials associated with open orders on the shop floor.

It was concluded that average lead time error has a considerable effect on system performance. PLTs that are on average too long or too short increase available inventory; and the further the average error is from zero, the more pronounced the increase. Contrary to conventional wisdom, increasing PLTs will increase the service level (decrease backorders), unless PLTs are already severely inflated and MPS uncertainty (forecast error) is small. If PLTs are inflated, decreasing them will decrease the number of setups per unit time in the case of considerable demand uncertainty. Contrary to conventional wisdom, increasing PLTs causes only a transient rise in WIP inventory.

The fact that the average lead time error has a significant effect on the three areas of system effectiveness mentioned above does not imply that a given order's lead time should be managed in a way that forces its actual lead time to match the PLT. Stated another way, the material planner may use the latest information to manage a given order's lead time; however, if the average discrepancy between the actual and planned lead times is large, system performance can be improved by changing the PLTs to approximate the average flow times.

Three methods that have been proposed for determining PLTs are compared. They are historical averages of the actual flow times, calculated lead times based on standard times and historical averages of the queuing time at the appropriate work centers, and the QUOAT lead time proposed by Hoyt. The third was found to perform poorly unless the work content of all operations is identical. With one exception, no differences were found between the first two methods. The simpler historical average method was superior to the calculated lead time in the case where the work content of each operation varies and when considerable demand uncertainty exists.

The results are based on simulation experiments employing a generalized MRP/Job-Shop stochastic simulation model. The program launches orders based on standard MRP logic, reschedules open orders by moving the due date in or out to coincide with revised need dates, moves manufacturing orders through a job shop, schedules the delivery of purchase orders, and updates inventory levels. The product structure tree contained eight distinct items, with four levels and one end item. There is no reason to believe that the conclusions would be any different had a larger system been studied.
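A tiny sketch of the lead time error definition above and of the simpler historical-average method, which sets an item's PLT to the mean of recently observed flow times. The data and function names are made up for illustration.

    def lead_time_error(planned: float, actual: float) -> float:
        """Error as defined in the article: PLT minus the actual flow time."""
        return planned - actual

    def plt_from_history(flow_times: list[float]) -> float:
        """Historical-average method: set the planned lead time to the
        mean of the actual flow times of recent replenishment orders."""
        return sum(flow_times) / len(flow_times)

    recent_flow_times = [11.0, 14.5, 12.0, 13.5]   # days, illustrative
    plt = plt_from_history(recent_flow_times)
    print(plt)                          # 12.75
    print(lead_time_error(plt, 15.0))   # -2.25: this order ran long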

6.
Material Requirements Planning (MRP) systems have been widely applied in industry to better manage multiproduct, multistage production environments. Although many applications have been quite successful, much is still left to the planner's intuition as to how to assure that master schedules, component lot sizes, and priorities realistically conform to the capacity limits at individual work centers. Capacity issues may indeed be the soft spot in MRP logic.

This paper explores some possible causes of irregular workload patterns when using an MRP system. Better insight into which factors cause temporary bottlenecks could help managers better assess the vulnerability of their plants to this problem. It might also suggest ways of dampening peaks and valleys. The problem setting is a multistage environment; several products are made from various subassemblies and parts. Each shop order is routed through one or more capacitated work centers. An order is delayed either by temporary capacity shortages or the unavailability of components. Of course, the second delay can be caused by capacity problems previously encountered by the shop orders of its components.

Seven experimental factors are tested with a large-scale simulator, and five performance measures are analyzed. The factors are the number of levels in the bill of material, the average load on the shop, the average lot size, the choice of priority rule, demand variability, the use of a gateway department, and the degree of equipment specialization. We have one measure of customer service, two for inventory, and two for workload. The workload measures are unconventional, since our interest is when workload variability occurs and how it affects inventory and customer service.

The simulator has been developed over the course of eight years, and since this study has been further enhanced to handle many more factors. The simulator was validated recently with real data at two manufacturing plants. It is quite general, in that the bills of material, shop configuration, routings, worker efficiencies, and operating rules can be changed as desired.

An initial screening experiment was performed, in which the average load and priority rule factors were not statistically significant at even the .05 level. A full factorial analysis with two replications was then conducted on the five remaining factors. Multivariate analysis of variance (MANOVA) and analysis of variance (ANOVA) statistical tests were performed.

The results confirm that workload variability can have a detrimental impact on customer service and inventory. The following structural changes to the manufacturing system can be beneficial, but tend to be more difficult to achieve. More BOM levels improve customer service, but increase inventory and capacity bottlenecks. Resource flexibility is a powerful tool to reduce workload variability. Capacity slack averaging much over 10% is wasteful, having no benefits for inventory and customer service. In general, revising the routing patterns only, such as creating more dominant paths, will not give big payoffs. The following procedural changes are easier to implement. Master schedules which smooth aggregate resources are an excellent device to reduce workload variability. Even with a smooth MPS, debilitating workload variability can still occur due to the design of the BOM, lot size, and leadtime offset parameters. Selecting a priority rule does not seem to be of overriding importance compared to master scheduling and component lot sizing. These findings must be considered within the context of the range of plant environments encompassed by this study.

7.
Available lot sizing rules for use in MRP (Material Requirements Planning) systems ignore capacity limitations at the various work centers when sizing future orders. Planned order releases are instead determined only by the tradeoff between the item's setup and inventory holding costs. This limitation can cause unanticipated overloads and underloads at the various work centers, along with higher inventories, poorer customer service, and excessive overtime.

This research explores one way to make MRP systems more sensitive to capacity limitations at the time of each regeneration run. A relatively simple heuristic algorithm is designed for this purpose. The procedure is applied to those planned order releases that standard MRP logic identifies as mature for release. The lot sizes for a small percentage of these items are increased or decreased so as to have the greatest impact in smoothing capacity requirements at the various work centers in the system. This algorithm for better integrating material requirements plans and capacity requirements plans is tested with a large-scale simulator in a variety of manufacturing environments. The simulator has subsequently undergone extensive tests, including its successful validation with actual data at a large plant of a major corporation.

Simulation results show that the algorithm's modest extension to MRP logic significantly helps overall performance, particularly with customer service. For a wide range of test environments, past due orders were reduced by more than 30% when the algorithm was used. Inventory levels and capacity problems also improved. Not surprisingly, the algorithm helps the most (compared to not using it at all as an MRP enhancement) in environments in which short-term bottlenecks are most severe. Large lot sizes and tight shop capacities are characteristic of these environments. The algorithm works best when forecast errors are not excessive and the master schedule is not too “nervous.”

This proposed procedure is but one step toward making MRP more capacity sensitive. The widely heralded concept of “closed-loop” MRP means that inventory analysts must change or “fix up” parts of the computer generated material requirements plan. What has been missing is a tool for identifying the unrealistic parts of the plan. Our algorithm helps formalize this identification process and singles out a few planned order releases each week. This information comes to the analyst's attention as part of the usual action notices. These pointers to capacity problems go well beyond capacity requirements planning (CRP) and would be impossible without computer assistance.

Our study produced two other findings. First, short-term bottlenecks occur even when the master production schedule is leveled. The culprits are the lot sizing choices for items at lower levels in the bills of material. “Rough-cut” capacity planning, such as resource requirements planning, therefore is not a sufficient tool for leveling capacity requirements. It must be supplemented by a way to smooth bottlenecks otherwise caused by shop orders for intermediate items. Second, the disruptive effect of large lot sizes is apparent, both in terms of higher inventories and worse customer service. Large lot sizes not only inflate inventories, but paradoxically hurt customer service because they create more capacity bottlenecks. The only reason why management should prefer large lot sizes is if setup times are substantial and cannot be efficiently reduced. This finding is very much in step with the current interest in just-in-time (JIT) systems.
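The abstract does not spell out the heuristic's mechanics, but a toy smoothing pass conveys the flavor: when a work center's planned load exceeds its capacity, shrink the lot size of a mature order feeding it (the article's algorithm also increases lots at underloaded centers and considers the whole system). Everything below is an illustrative sketch, not the authors' procedure.

    def work_center_load(orders: list[dict]) -> dict:
        """Sum planned hours by work center for the coming period."""
        load: dict[str, float] = {}
        for o in orders:
            load[o["wc"]] = load.get(o["wc"], 0.0) + o["hours"]
        return load

    def smooth_releases(orders: list[dict], capacity: dict) -> dict:
        """One greedy pass: while a work center is overloaded, halve the
        lot of its largest mature order (the deferred half would be
        released in a later period)."""
        load = work_center_load(orders)
        for o in sorted(orders, key=lambda o: -o["hours"]):
            if load[o["wc"]] > capacity[o["wc"]]:
                load[o["wc"]] -= o["hours"] / 2
                o["hours"] /= 2
        return load

    orders = [{"id": 1, "wc": "lathe", "hours": 30.0},
              {"id": 2, "wc": "lathe", "hours": 12.0},
              {"id": 3, "wc": "mill",  "hours": 8.0}]
    print(smooth_releases(orders, {"lathe": 32.0, "mill": 20.0}))
    # {'lathe': 27.0, 'mill': 8.0}: the 42-hour lathe overload is relieved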

8.
Increasing human and social capital by applying job embeddedness theory
Most modern lives are complicated. When employees feel that their organization values the complexity of their entire lives and tries to do something about making it a little easier for them to balance all the conflicting demands, the employees tend to be more productive and stay with those organizations longer. Job embeddedness captures some of this complexity by measuring both the on-the-job and off-the-job components that most contribute to a person's staying. Research evidence as well as ample anecdotal evidence (discussed here and in other places) supports the value of using the job embeddedness framework for developing a world-class retention strategy based on corporate strengths and employee preferences.

To execute their corporate strategy effectively, different organizations require different knowledge, skills and abilities from their people. And because of occupational, geographic, demographic or other differences, these people will have needs that differ from those of people at other organizations. For that reason, the retention program of the week from international consultants won’t always work. Instead, organizations need to carefully assess the needs and desires of their unique employee base. Then, these organizations need to determine which of these needs they can address in a cost-effective fashion (conferring more benefits than the cost of the program). Many times this requires an investment that will pay off over a longer term – not just a quarter or even a year. Put differently, executives will need to carefully understand the fully loaded costs of turnover (loss of tacit knowledge, reduced customer service, slowed production, lost contracts, lack of internal candidates to lead the organization in the future, etc., in addition to the obvious costs like recruiting, selecting and training new people). Then, these executives need to recognize the expected benefits of various retention practices. Only then can leaders make informed decisions about strategic investments in human and social capital.

Selected bibliography

A number of articles have influenced our thinking about the importance of connecting employee retention strategies to business strategies:
• R. W. Beatty, M. A. Huselid, and C. E. Schneier. “New HR Metrics: Scoring on the Business Scorecard,” Organizational Dynamics, 2003, 32(2), 107–121.
• J. Bradach. “Organizational Alignment: The 7-S Model,” Harvard Business Review, 1998.
• J. Pfeffer. “Producing Sustainable Competitive Advantage Through the Effective Management of People,” Academy of Management Executive, 1995, 9, 1–13.
• C. J. Collins, and K. D. Clark. “Strategic Human Resources Practices and Top Management Team Social Networks: An Examination of the Role of HR Practices in Creating Organizational Competitive Advantage,” Academy of Management Journal, 2003, 46, 740–752.
The theoretical development and empirical support for the Unfolding Model of turnover are captured in the following articles:
• T. Lee, and T. Mitchell. “An Alternative Approach: The Unfolding Model of Voluntary Employee Turnover,” Academy of Management Review, 1994, 19, 57–89.
• B. Holtom, T. Mitchell, T. Lee, and E. Inderrieden. “Shocks as Causes of Turnover: What They Are and How Organizations Can Manage Them,” Human Resource Management, 2005, 44(3), 337–352.
The development of job embeddedness theory is captured in the following articles:
• T. Mitchell, B. Holtom, T. Lee, C. Sablynski, and M. Erez. “Why People Stay: Using Job Embeddedness to Predict Voluntary Turnover,” Academy of Management Journal, 2001, 44, 1102–1121.
• T. Mitchell, B. Holtom, and T. Lee. “How to Keep Your Best Employees: The Development of an Effective Retention Policy,” Academy of Management Executive, 2001, 15(4), 96–108.
• B. Holtom, and E. Inderrieden. “Integrating the Unfolding Model and Job Embeddedness to Better Understand Voluntary Turnover,” Journal of Managerial Issues, in press.
• D.G. Allen. “Do Organizational Socialization Tactics Influence Newcomer Embeddedness and Turnover?” Journal of Management, 2006, 32, 237–257.
Executive Summary

Employee turnover is costly to organizations. Some of the costs are obvious (e.g., recruiting, selecting, and training expenses) and others are not so obvious (e.g., diminished customer service ability, lack of continuity on key projects, and loss of future leadership talent). Understanding the value inherent in attracting and keeping excellent employees is the first step toward investing systematically to build the human and social capital in an organization. The second step is to identify retention practices that align with the organization's strategy and culture. Through extensive research, we have developed a framework for creating this alignment. We call this theory job embeddedness. Across multiple industries, we have found that job embeddedness is a stronger predictor of important organizational outcomes, such as employee attendance, retention and performance, than the best-known and accepted psychological explanations (e.g., job satisfaction and organizational commitment). The third step is to implement the ideas. Throughout this article we discuss examples from the Fortune 100 Best Companies to Work For and many others to demonstrate how job embeddedness theory can be used to build human and social capital by increasing employee retention.

9.
Over the last several years expert systems (ES) have gained almost sensational interest. Within business administration, production management might be one of the most fruitful application areas for ES. There already exist a number of interesting pilot systems, and reports of research projects are beginning to appear in the literature.

The main goal of this study is to identify systematically those areas in production management where an ES approach might be most promising. This is important to both researchers and practitioners because it helps pinpoint where research and development resources would be best allocated.

In this article the authors provide a taxonomy for production management activities. They then combine this taxonomy with a well-known list of eight “expert tasks” to provide what they call an “applications map” to guide the discussion.

After discussing existing research efforts and potential production management applications of expert systems, the authors employ a Likert scoring procedure to quantify their subjective ratings as to problem importance, potential for improved solution, and ease of development, for expert systems development efforts in a given production management decision situation.

One conclusion here is that the applicability of expert systems to production management appears to be broadly based. This is particularly true for what the authors have labeled as “technological” activities. An interesting finding is the apparent lack of applicability of expert systems to inventory management. The authors found no existing system or research proposals applying expert systems to inventory management. Finally, systems that combine technological with logistical knowledge seem to be a fertile (but difficult) application area for ES.

10.
Complex systems that are required to perform very reliably are often designed to be “fault-tolerant,” so that they can function even though some component parts have failed. Often fault-tolerance is achieved through redundancy, involving the use of extra components. One prevalent redundant component configuration is the m-out-of-n system, where at least m of n identical and independent components must function for the system to function adequately.

Often machines containing m-out-of-n systems are scheduled for periodic overhauls, during which all failed components are replaced, in order to renew the machine's reliability. Periodic overhauls are appropriate when repair of component failures as they occur is impossible or very costly. This will often be the case for machines which are sent on “missions” during which they are unavailable for repair. Examples of such machines include computerized control systems on space vehicles, military and commercial aircraft, and submarines.

An interesting inventory problem arises when periodic overhauls are scheduled. How many spare parts should be stocked at the maintenance center in order to meet demands? Complex electronic equipment is rarely scrapped when it fails. Instead, it is sent to a repair shop, from which it eventually returns to the maintenance center to be used as a spare. A Markov model of spares availability at such a maintenance center is developed in this article. Steady-state probabilities are used to determine the initial spares inventory that minimizes total shortage cost and inventory holding cost. The optimal initial spares inventory will depend upon many factors, including the values of m and n, the component failure rate, the repair rate, the time between overhauls, and the shortage and holding costs.

In a recent paper, Lawrence and Schaefer [4] determined the optimal maintenance center inventories for fault-tolerant repairable systems. They found optimal maintenance center inventories for machines containing several sets of redundant systems under a budget constraint on total inventory investment. This article extends that work in several important ways. First, we relax the assumption that the parts have constant failure rates. In this model, component failure rates increase as the parts age. Second, we determine the optimal preventive maintenance policy, calculating the optimal age at which a part should be replaced, even if it has not failed, because the probability of subsequent failure has become unacceptably high. Third, we relax the earlier assumption that component repair times are independent, identically distributed random variables. In this article we allow congestion to develop at the repair shop, making repair times longer when there are many items requiring repair. Fourth, we introduce a more efficient solution method, marginal analysis, as an alternative to the dynamic programming used in the earlier paper. Fifth, we modify the model in order to deal with an alternative objective of maximizing the job-completion rate.

In this article, the notation and assumptions of the earlier model are reviewed. The requisite changes in the model development and solution needed to extend the model are described. Several illustrative examples are included.
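For reference, the reliability of an m-out-of-n system of identical, independent components follows directly from the binomial distribution. A short sketch of that standard textbook formula (this is background, not the article's Markov model):

    from math import comb

    def m_out_of_n_reliability(m: int, n: int, p: float) -> float:
        """Probability that at least m of n identical, independent
        components, each functioning with probability p, are working."""
        return sum(comb(n, k) * p**k * (1 - p)**(n - k)
                   for k in range(m, n + 1))

    # A 2-out-of-3 system of 90%-reliable components:
    print(round(m_out_of_n_reliability(2, 3, 0.9), 4))  # 0.972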

11.
This study investigates how process choice relates to production planning and inventory control decisions. We empirically examine the validity of deductively derived patterns about these types of decisions. More importantly, we look for normative insights by exploring how production planning and inventory control decisions affect operations performance. Our findings show that production line and continuous flow plants use more of a level production strategy, and carry less raw material and work-in-process inventory. The performance drivers for these plants, through which the operations function excels, are effective utilization of equipment, reduced finished goods inventory, and reduced setup downtime. To gain forward demand visibility and batching economies, job and batch shops rely much more on backlogs in their planning process. These plants use more of a production chase strategy and position inventory lower in the bills of materials. Four performance drivers for top-performing job and batch shops are to find ways to better anticipate customers' orders, have a more responsive chase strategy, carry less raw material or purchased inventory, and shorten the production planning horizon, partly through less reliance on backlogs. It is intriguing that top-performing plants not only do the expected things, given their choice of process, but also excel in selected dimensions—some of which fit the profile normally associated with a different process choice. To monitor and continuously improve operations, evaluation ‘scorecards’ should pay particular attention to performance drivers, which change depending on the plant's process choice.

12.
E-Leadership and Virtual Teams
In this paper we have identified some key challenges for E-leaders of virtual teams. Among the most salient of these are the following:
• The difficulty of keeping tight and loose controls on intermediate progress toward goals
• Promoting close cooperation among teams and team members in order to integrate deliverables
• Encouraging and recognizing emergent leaders in virtual teams
• Establishing explicit processes for archiving important written documentation
• Establishing and maintaining norms and procedures early in a team’s formation and development
• Establishing proper boundaries between home and work
Virtual team environments magnify the differences between good and bad projects, organizations, teams, and leaders. The nature of such projects is that there is little tolerance for ineffective leadership. There are some specific issues and techniques for mitigating the negative effects of more dispersed employees, but these are merely extensions of good leadership—they cannot make up for the lack of it.

SELECTED BIBLIOGRAPHY

An excellent reference for research on teams is M. E. Shaw, R. M. McIntyre, and E. Salas, “Measuring and Managing for Team Performance: Emerging Principles from Complex Environments,” in R. A. Guzzo and E. Salas, eds., Team Effectiveness and Decision Making in Organizations (San Francisco: Jossey-Bass, 1995). For a fuller discussion of teleworking and performance-management issues in virtual teams, see W. F. Cascio, “Managing a Virtual Workplace,” Academy of Management Executive, 2000, 14(3), 81–90, and also C. Joinson, “Managing Virtual Teams,” HRMagazine, June 2002, 69–73. Several sources discuss the issue of trust in virtual teams: D. Coutu, “Trust in Virtual Teams,” Harvard Business Review, May–June 1998, 20–21; S. L. Jarvenpaa, K. Knoll, and D. E. Leidner, “Is Anybody Out There? Antecedents of Trust in Global Virtual Teams,” Journal of Management Information Systems, 1998, 14(4), 29–64. See also Knoll and Jarvenpaa, “Working Together in Global Virtual Teams,” in M. Igbaria and M. Tan, eds., The Virtual Workplace (Hershey, PA: Idea Group Publishing, 1998).

Estimates of the number of teleworkers vary. For examples, see Gartner Group, Report R-06-6639, November 18, 1998, and also Telework America survey, news release, October 23, 2001. We learned about CPP’s approach to managing virtual work arrangements through David Krantz, personal communication, August 20, 2002, Palo Alto, CA.

There are several excellent references on emergent leaders. For example, see G. Lumsden and D. Lumsden, Communicating in Groups and Teams: Sharing Leadership (Belmont, CA: Wadsworth, 1993); Lumsden and Lumsden, Groups: Theory and Experience, 4th ed. (Boston: Houghton, 1993); R. W. Napier and M. K. Gershenfeld, Groups: Theory and Experience, 4th ed. (Boston: Houghton, 1989); and M. E. Shaw, Group Dynamics: The Psychology of Small Group Behavior, 3rd ed. (New York: McGraw-Hill, 1981).

An excellent source for e-mail style is D. Angell and B. Heslop, The Elements of E-mail Style: Communicate Effectively via Electronic Mail (Reading, MA: Addison-Wesley Publishing Company, 1994). To read more on the growing demand for flexible work arrangements, see “The New World of Work: Flexibility is the Watchword,” Business Week, 10 January 2000, 36.

For more on individualism and collectivism, see H. C. Triandis, “Cross-cultural Industrial and Organizational Psychology,” in H. C. Triandis, M. D. Dunnette, and L. M. Hough, eds., Handbook of Industrial and Organizational Psychology, 2nd ed., vol. 4 (Palo Alto, CA: Consulting Psychologists Press, 1994), 103–172.

Executive Summary

As the wired world brings us all closer together, at the same time as we are separated by time and distance, leadership in virtual teams becomes ever more important. Information technology makes it possible to build far-flung networks of organizational contributors, although unique leadership challenges accompany their formation and operation. This paper describes the growth of virtual teams, the various forms they assume, the kinds of information and support they need to function effectively, and the leadership challenges inherent in each form. We then provide workable, practical solutions to each of the leadership challenges identified.

13.
In this paper we study the effect of a micro-level measure of flexicurity on workers' job satisfaction. To this end, using micro-data from the Eurobarometer survey, we disaggregate the sample of workers into different groups according not only to their employment contract (i.e. permanent or temporary), but also to their perceived job security, and we evaluate differences in job satisfaction between these groups. After the potential endogeneity of job type has been controlled for, the results show that what matters for job satisfaction is not just the type of contract, but mainly the perceived job security, which may be independent of the type of contract.

The combination “temporary but secure job” seems preferable to the combination “permanent but insecure job”, indicating that the length of the contract may be less important if the worker perceives that s/he is not at risk of becoming unemployed. Our main conclusions are robust to the use of alternative definitions of workers' types and they generally hold within different welfare regimes and also for different aspects of job satisfaction, mainly those more related to job security.

14.
This study investigates the behavior of a job shop depicted as an integral component of a firm. A market places demands for the firm's products by dynamically evaluating the organization's quoted delivery times and actual delivery performance. The closed-loop model simulated in this study is described and the salient research results are reported. These experimental outcomes suggest that other conventional open-loop job shop studies tend to neglect important interactions with factors external to the shop itself.

15.
We propose and develop a scheduling system for a very special type of flow shop. This flow shop processes a variety of jobs that are identical from a processing point of view. All jobs have the same routing over the facilities of the shop and require the same amount of processing time at each facility. Individual jobs, though, may differ since they may have different tasks performed upon them at a particular facility. Examples of such shops are flexible machining systems and integrated circuit fabrication processes. In a flexible machining system, all jobs may have the same routing over the facilities, but the actual tasks performed may differ; for instance, a drilling operation may vary in the placement or size of the holes. Similarly, for integrated circuit manufacturing, although all jobs may follow the same routing, the jobs will be differentiated at the photolithographic operations. The photolithographic process establishes patterns upon the silicon wafers where the patterns differ according to the mask that is used.

The flow shop that we consider has another important feature, namely the job routing is such that a job may return one or more times to any facility. We say that when a job returns to a facility it reenters the flow at that facility, and consequently we call the shop a re-entrant flow shop. In integrated circuit manufacturing, a particular integrated circuit will return several times to the photolithographic process in order to place several layers of patterns on the wafer. Similarly, in a flexible machining system, a job may have to return to a particular station several times for additional metal-cutting operations.

These re-entrant flow shops are usually operated and scheduled as general job shops, ignoring the inherent structure of the shop flow. Viewing such shops as job shops means using myopic scheduling rules to sequence jobs at each facility and usually requires large queues of work-in-process inventory in order to maintain high facility utilization, but at the expense of long throughput times.

In this paper we develop a cyclic scheduling method that takes advantage of the flow character of the process. The cycle period is the inverse of the desired production rate (jobs per day). The cyclic schedule is predicated upon the requirement that during each cycle the shop should perform all of the tasks required to complete a job, although possibly on different jobs. In other words, during a cycle period we require each facility to do each task assigned to it exactly once. With this requirement, a cyclic schedule is just the sequencing and timing on each facility of all of the tasks that that facility must perform during each cycle period. This cyclic schedule is to be repeated by each facility each cycle period. The determination of the best cyclic schedule is a very difficult combinatorial optimization problem that we cannot solve optimally for actual operations. Rather, we present a computerized heuristic procedure that seems very effective at producing good schedules. We have found that the throughput time of these schedules is much less than that achievable with myopic sequencing rules as used in a job shop. We are attempting to implement the scheduling system at an integrated circuit fabrication facility.
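A small sketch of the cycle-period arithmetic described above: the cycle period is the inverse of the target production rate, and a facility can keep up only if all of the tasks assigned to it (performed once each per cycle) fit within one period. The shop data are invented for illustration.

    def cycle_period_hours(jobs_per_day: float,
                           hours_per_day: float = 24.0) -> float:
        """Cycle period = 1 / production rate, expressed in hours."""
        return hours_per_day / jobs_per_day

    def feasible(facility_task_hours: dict[str, list[float]],
                 jobs_per_day: float) -> bool:
        """Each facility must perform each of its tasks exactly once per
        cycle, so its total task time must fit within the period."""
        period = cycle_period_hours(jobs_per_day)
        return all(sum(tasks) <= period
                   for tasks in facility_task_hours.values())

    # A re-entrant routing: photolithography is visited three times per job.
    tasks = {"litho": [2.0, 2.0, 1.5], "etch": [3.0], "implant": [1.0]}
    print(cycle_period_hours(4.0))  # 6.0-hour cycle at 4 jobs/day
    print(feasible(tasks, 4.0))     # True: busiest facility needs 5.5 h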

16.
Job shop scheduling usually includes the process of selecting dispatch rules for loading shops with work. Traditionally, dispatch rules have been formed on the basis of processing time, operating time, or queueing order. A job shop scheduling model was developed to include external factors (such as due dates), internal factors (e.g., capacity), and influence factors (e.g., job status). Based on the model developed in this report, a survey of industrial engineers, shop foremen, and production control supervisors was undertaken to determine which dispatch rules experienced job shop schedulers would select and whether the selection process could be influenced by schedule conditions (status) or other organizational factors. Results suggest that schedulers may be influenced by these other factors. This article suggests a model for further research with respect to job shop scheduling.

17.
This article examines recent research on occupational segregation by gender. It reviews and evaluates statistical approaches to measuring the extent to which women are disproportionately represented in “women’s jobs” and men in “men’s jobs.” By combining the findings of a number of studies, it traces the changes in the extent of occupational segregation from the end of the nineteenth century until 1995, and the forms and extent of gender segregation in occupations cross-nationally. In addition to the trends, this article considers the consequences of segregation for women. Finally, current explanations for occupational segregation are analyzed and assessed by considering the empirical data on occupational segregation.

18.
In this paper, some new indices for ordinal data are introduced. These indices have been developed to measure the degree of concentration on the “small” or the “large” values of a variable whose level of measurement is ordinal. Their advantage in relation to other approaches is that they ascribe unequal weights to each class of values. Although they constitute a useful tool in various fields of application, the focus here is on their use in sample surveys, and specifically in situations where one is interested in taking into account the “distance” of the responses from the “neutral” category in a given question. The properties of these indices are examined and methods for constructing confidence intervals for their actual values are discussed. The performance of these methods is evaluated through an extensive simulation study.

19.
Group technology is a manufacturing philosophy that attempts to provide some of the operational advantages of a line layout while maintaining some of the strategic advantages of the job shop layout. In designing a productive process that will adopt this manufacturing strategy, one of the primary problems encountered is the formation of component families and production cells. The production cell is a group of machines or processes of functionally dissimilar types that are placed together and dedicated to the manufacture of a specific range of component families.

Several researchers in operations management have proposed methods of forming production cells and component families. These methods differ in terms of information requirements and also in terms of the final cell design. Furthermore, the objectives for each method are quite different, and it thus seems that the focus has been on the method rather than its appropriateness in a particular situation. This article reviews some of the most publicized methods of group formation and analyzes the type of cells that could be formed using these methods. Subsequently, an evaluative framework is presented where the relative advantages of each type of production cell are discussed in terms of several strategic and operational factors. This framework is of particular use as it highlights the fact that in implementing a cellular manufacturing system, most organizations will face a trade-off of strategic and operational “costs.” Finally, the appropriateness of the cell types with respect to the degree of customer interaction is also discussed.

20.
For many production systems, delivery performance relative to promised job due dates is a critical evaluation criterion. Delivery performance is affected by the way in which work is dispatched on the shop floor, and also by the way the job due dates are assigned to begin with. This paper shows how information regarding congestion levels on the shop floor can be used to assign due dates to arriving jobs in such a way that the mean tardiness of jobs is decreased without increasing the average length of the promised delivery lead times.

Baker and Bertrand suggested a modification of the Total Work (TWK) rule for assigning job due dates which adjusts the job flow allowance according to the level of congestion in the shop. Their method gives longer flow allowances to jobs which arrive when the system is congested. Although their modified TWK rule results in lower mean tardiness in many settings, it also generally results in a higher proportion of jobs tardy.

This paper presents an alternative modification of the TWK rule which, in most cases, provides mean tardiness as low as or lower than Baker and Bertrand's rule and also results in a lower proportion of jobs tardy. The alternative rule suggested here still results in a higher proportion of tardy jobs than the non-workload-adjusted rule in most settings, but suggestions are made for how this problem might be addressed.
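For context, the classic TWK rule sets a job's due date to its arrival time plus a multiple of its total work content; the workload-adjusted variants stretch that allowance when the shop is congested. The sketch below uses a simple linear congestion factor purely for illustration; it is not the specific adjustment of either this paper or Baker and Bertrand's.

    def twk_due_date(arrival: float, total_work: float, k: float) -> float:
        """Classic TWK: flow allowance proportional to work content."""
        return arrival + k * total_work

    def adjusted_twk_due_date(arrival: float, total_work: float, k: float,
                              jobs_in_shop: int, normal_load: int) -> float:
        """Workload-adjusted TWK in the spirit described above: lengthen
        the allowance when the shop holds more jobs than usual.
        The linear congestion factor is an illustrative assumption."""
        congestion = jobs_in_shop / normal_load
        return arrival + k * total_work * congestion

    print(twk_due_date(100.0, total_work=8.0, k=3.0))             # 124.0
    print(adjusted_twk_due_date(100.0, 8.0, 3.0,
                                jobs_in_shop=30, normal_load=20)) # 136.0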
