Similar documents: 20 results found (search time 31 ms).
1.
A complex systems methodology to transition management
There is a general sense of urgency that major technological transitions are required for sustainable development. Such transitions are best perceived as involving multiple transition steps along a transition path. Due to the path-dependent and irreversible nature of innovation in complex technologies, an initial transition step along some preferred path may cut off paths that may later turn out to be more desirable. For these reasons, initial transition steps should allow for future flexibility, where we define flexibility as robustness to changing evidence and changing preferences. We propose a technology assessment methodology based on rugged fitness landscapes, which identifies the flexibility of initial transition steps in complex technologies. We illustrate our methodology with an empirical application to 2,646 possible future car systems.
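The fitness-landscape idea can be made concrete with a toy NK-style model. The sketch below is illustrative only, not the authors' actual model of car systems; all names, parameters, and the flexibility measure are hypothetical simplifications.

```python
import itertools
import random

def nk_landscape(N, K, seed=0):
    """NK-style rugged landscape: component i's fitness contribution
    depends on its own state and the states of K neighbouring components."""
    rng = random.Random(seed)
    tables = [{bits: rng.random()
               for bits in itertools.product((0, 1), repeat=K + 1)}
              for _ in range(N)]
    def fitness(design):
        return sum(tables[i][tuple(design[(i + j) % N] for j in range(K + 1))]
                   for i in range(N)) / N
    return fitness

def flexibility(fitness, design):
    """Crude flexibility proxy: number of single-component changes from
    `design` that still improve fitness, i.e. how many paths remain open."""
    base = fitness(design)
    return sum(fitness(design[:i] + (1 - design[i],) + design[i + 1:]) > base
               for i in range(len(design)))

f = nk_landscape(N=8, K=2)
status_quo = (0,) * 8
step_a = (1,) + status_quo[1:]      # two candidate initial transition steps
step_b = status_quo[:7] + (1,)
print("open paths after step A:", flexibility(f, step_a))
print("open paths after step B:", flexibility(f, step_b))
```

Comparing candidate first steps by how many improving follow-up moves each leaves open captures, in miniature, the paper's notion of choosing flexible initial transition steps.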
Koen Frenken (corresponding author)

2.
A cross-national understanding of technology policy decision processes and basic premises underlying technology assessment must be established before an effective technology assessment methodology can be developed to conduct substantive technology assessments on an international scale.

3.
The purpose of this paper is to describe the impact of investment in computers on the growth of the U.S. economy. The economic literature on computers is relatively rich in information on the decline in computer prices and the growth of computer investment. Constant quality price indices for computers have been included in the U.S. National Income and Product Accounts (NIPA) since 1986. These indices employ state-of-the-art methodology to capture the rapid evolution of computer technology.

While the annual inflation rate for overall investment has been 3.66 percent for the period 1958 to 1992, computer prices have declined by 19.13 percent per year! Similarly, overall investment grew at 3.82 percent, while investment in computers increased at an astounding 44.34 percent! These familiar facts describe growth in the output of computers. The objective of this paper is to complete the picture by analyzing the growth of computer services as inputs.
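Assuming these are average annual growth rates in logarithmic (continuously compounded) form, as is standard in growth accounting, the cumulative magnitudes over the 34-year period are striking:

$$\frac{P_{1992}}{P_{1958}} = e^{-0.1913 \times 34} \approx 0.0015, \qquad \frac{I_{1992}}{I_{1958}} = e^{0.4434 \times 34} \approx 3.5 \times 10^{6},$$

that is, a roughly 670-fold fall in constant-quality computer prices alongside a several-million-fold rise in real computer investment.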

In a pioneering paper, Bresnahan (1986) focused on pecuniary externalities arising from the rapid decline in computer prices. Griliches (1992, 1994) has emphasized the distinction between pecuniary and nonpecuniary externalities in the impact of computer investment on growth. This paper is limited to pecuniary externalities, or the impact of reductions in computer prices on the substitution of computer services for other inputs. As Griliches (1992) points out, this is an essential first step in identifying nonpecuniary externalities or ‘spillovers’ through the impact of a decline in computer prices on productivity growth. (Brynjolfsson (1993) has provided a detailed survey of studies of nonpecuniary externalities or ‘spillovers’; recent studies include those of Brynjolfsson and Hitt (1994a, 1994b) and Lichtenberg (1993).)

In two important papers Stephen D. Oliner (1993, 1994) has introduced a model of computer technology that greatly facilitates the measurement of computer services as inputs. In this paper we estimate computer stocks and flows of computer services for all forms of computer investment included in NIPA. We construct estimates of computer services parallel to NIPA data on computer investment by combining these data with information on computer inventories. For example, the International Data Corporation (IDC) Census of Computer Processors includes an annual inventory of processors in the U.S.

In Section 1 we present data on investment in computers and constant quality price indices from NIPA. These data incorporate important innovations in modeling computer technology stemming from a joint study by IBM and the Bureau of Economic Analysis (BEA) completed in 1985. This study utilized a ‘hedonic’ methodology for constructing an econometric model of computer prices that accurately reflects rapid changes in computer technology. This methodology generates an index of computer prices that holds the quality of computers constant.
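The time-dummy variant of the hedonic method can be sketched as follows. The data, variable names, and coefficients below are hypothetical, and this is an illustration of the general technique rather than the BEA/IBM model itself.

```python
import numpy as np

# Hypothetical data: log price, characteristics, and model year for computers.
rng = np.random.default_rng(0)
n, years = 200, np.arange(1986, 1993)
year = rng.choice(years, n)
mem, speed = rng.normal(size=n), rng.normal(size=n)
logp = 1.5 * mem + 0.8 * speed - 0.25 * (year - years[0]) + rng.normal(scale=0.1, size=n)

# Time-dummy hedonic regression: log p = a + b1*mem + b2*speed + sum_t d_t*1[year = t].
dummies = (year[:, None] == years[1:]).astype(float)   # base year omitted
X = np.column_stack([np.ones(n), mem, speed, dummies])
coef, *_ = np.linalg.lstsq(X, logp, rcond=None)

# The exponentiated year-dummy coefficients trace out a constant-quality price index.
index = np.exp(np.concatenate([[0.0], coef[3:]]))
print(dict(zip(years, index.round(3))))   # quality held fixed, prices fall
```

Because the characteristics are held fixed in the regression, the year dummies measure pure price change for a computer of constant quality, which is exactly what the NIPA indices aim to capture.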

In Section 2 we present the model of computer services originated by Oliner (1993, 1994). This differs in important respects from the model of capital services used in the previous studies of U.S. economic growth surveyed by Jorgenson (1989, 1990). The model employed in previous studies is based on the decline in productive capacity with the chronological age of a capital good. Oliner assumes that computers maintain their productive capacity until they are retired. Decline in productive capacity occurs only through removal of used computers from the inventory through retirement.
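In this "one-hoss shay" framework, one way to write the computer stock is in terms of cumulative surviving investment (the notation here is ours, not necessarily Oliner's):

$$K_t = \sum_{\tau \le t} I_\tau \bigl[1 - F(t - \tau)\bigr],$$

where $F(a)$ is the fraction of units retired by age $a$. Each surviving unit delivers undiminished services, so the flow of computer services is proportional to $K_t$ rather than to an age-depreciated stock.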

In Section 3 we construct estimates of stocks of computers that incorporate IDC data on computer inventories and derive the implied flow of computer services. While output of computer investments has grown very rapidly, the input of computer services has grown even faster. The price of these services has declined at 23.22 percent per year over the period 1958 to 1992, while the input of these services has grown at 52.82 percent! This is prima facie evidence of an important role for computer price declines as a source of pecuniary externalities.

In Section 4 we combine computer services with the services of other types of capital to produce a measure of capital input into the U.S. economy. We link this with labor input to obtain the contributions of both inputs to U.S. economic growth, arriving at the growth of productivity as a residual. We find that the contribution of computer services to input into the U.S. economy is far more important than the contribution of computer investments to output. This is a significant step toward resolution of the Solow paradox: ‘We see computers everywhere except in the productivity statistics’ (Robert M. Solow, quoted by Brynjolfsson (1993)). Declines in computer prices generate very sizable pecuniary externalities through the substitution of computer services for other inputs. By contrast, Solow focuses on nonpecuniary externalities that would appear as productivity growth.

In Section 5 we conclude that information on inventories of computers is critical in quantifying the role of computer services as inputs. The constant quality price indices for computers incorporated into NIPA are also essential. A price index for computers that reflects only general trends in inflation would result in a highly distorted perspective on the growth of GDP and capital services, especially during the past decade. To capture the contribution of all forms of investment to U.S. economic growth, similar price indices should be included in NIPA for capital goods with rapidly evolving technologies, as proposed by Gordon (1990).

The long-term goal should be a unified system of income, product, and wealth accounts, like that proposed by Laurits Christensen and Jorgenson (1973) and Jorgenson (1980). This incorporates capital stocks, capital services, and their prices. Achieving this goal will necessitate much greater elaboration of the accounting system described in Section 3. These accounts would incorporate data on prices and quantities of investment, stocks of assets, and capital services for all forms of capital employed in the U.S. economy.

4.
The paper proposes a multi-agent climate-economic model, the “battle of perspectives 2.0”. It is an updated and improved version of the original “battle of perspectives” model described in Janssen (1996) and Janssen/de Vries (1998). The model integrates agents with differing beliefs about economic growth and the sensitivity of the climate system and places them in environments that do or do not correspond to their beliefs. In a second step, different agent types rule the world jointly. Using a learning procedure based on operators known from genetic algorithms, the model shows how agents revise mistaken beliefs over time. It is thus an evolutionary model of climate protection decisions. The paper argues that such models may help in analyzing why cost-minimizing protection paths, derived from integrated assessment models à la Nordhaus/Sztorc (2013), are not followed. Although this view is supported by numerous authors, few such models exist. With the “battle of perspectives 2.0” the paper offers a contribution to their development. Compared to the former version, more agent types are considered and more aspects have been endogenized.
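Schematically, GA-style belief adaptation works as in the toy sketch below. This is not the "battle of perspectives 2.0" code itself; the "true" climate sensitivity and all parameters are hypothetical.

```python
import random

rng = random.Random(0)
TRUE_SENSITIVITY = 3.0          # hypothetical "real world" the agents live in

# Each agent's worldview is a guess at climate sensitivity.
beliefs = [rng.uniform(0.5, 6.0) for _ in range(20)]

def fitness(b):
    return -abs(b - TRUE_SENSITIVITY)   # better forecasts = higher fitness

for generation in range(50):
    # Selection: keep the better-forecasting half of the worldviews.
    beliefs.sort(key=fitness, reverse=True)
    parents = beliefs[:10]
    # Crossover + mutation: recombine surviving beliefs with small noise.
    children = [(rng.choice(parents) + rng.choice(parents)) / 2 + rng.gauss(0, 0.1)
                for _ in range(10)]
    beliefs = parents + children

print(f"mean belief after learning: {sum(beliefs) / len(beliefs):.2f}")
```

Agents placed in an environment that contradicts their worldview gradually converge toward it, which is the mechanism the model uses to study the adaptation of wrong beliefs.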

5.
In addition to pointing out that Kumar (1983) omits Hough (1981) and Knight (1983) from its list of references, Hough (1983) raises two issues of a largely statistical nature. (The omissions are clearly inadvertent; the time lag between publication and general availability makes the reference list look more subjective than it actually is.) These are:

1. the use of an average (AC) rather than a total (TC) cost curve as the correct statistical device for examining economies of scale, and

2. estimation problems arising from possible heteroscedasticity of the error term in the estimating relation.

Exactly the same issues are also raised in Hough (1981). Notwithstanding our earlier disclaimer (p. 324) that ‘no new grounds are broken on the methodological front’, as well as a clear mention of both problems (footnotes 1 and 2), Hough is quite correct that the implications of these issues for our results were not spelled out. We take this opportunity to offer some clarifications with respect to the results in Kumar (1983).
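A standard textbook illustration of how the two issues interact (our notation, not taken from either comment): if total cost is $TC_i = \alpha + \beta Q_i + u_i$ with homoscedastic errors, $\mathrm{Var}(u_i) = \sigma^2$, then dividing through by output to estimate the average cost curve gives

$$AC_i = \beta + \frac{\alpha}{Q_i} + \frac{u_i}{Q_i}, \qquad \mathrm{Var}\!\left(\frac{u_i}{Q_i}\right) = \frac{\sigma^2}{Q_i^2},$$

so the choice between AC and TC specifications and the assumed error structure cannot be separated: whichever form has homoscedastic errors makes OLS on the other form heteroscedastic.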

6.
This paper extends a broad functional category approach for the study of technological capability progress, recently developed and applied to information technology, to a second key case: energy-based technologies. The approach is applied to the same three functional operations—storage, transportation and transformation—used for information technology, by first building a 100-plus-year database for each of the three energy-based functional categories. In agreement with the results for information technology in the first paper, the energy technology results indicate that the functional approach offers a stable methodology for assessing longer-term trends in technological progress. Moreover, similar to what was found with information technology in the first study, the functional capability for energy technology shows continual—if not continuous—improvement that is best described quantitatively as exponential with respect to time. The absence of capability discontinuities—even with large technology displacement—and the lack of clear saturation effects are found for energy as they were for information. However, some key differences between energy and information technology are seen, and these include:
• Lower rates of progress for energy technology over the entire period: 19-37% annually for information technology versus 3-13% for energy technology.
• Substantial variability of progress rates within given functional categories for energy, compared to relatively small variation within any one category for information technology. The strongest variation is found in capability progress among different energy types.
• More challenging data recovery and metric definition for energy as compared to information technology.
These findings are interpreted in terms of fundamental differences between energy and information, including the losses and efficiency constraints on energy. We apply Whitney's insight that these fundamental differences lead to naturally modular information technology artifacts. The higher progress rates of information-based as opposed to energy-based technologies follow, since decomposable systems can progress more rapidly owing to the greater ease of independent as opposed to simultaneous development. In addition, the broad implications of our findings for studies of the relationships between technical and social change are briefly discussed.
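The exponential characterization amounts to fitting a log-linear trend to each functional capability series. The sketch below uses synthetic data with an assumed 7% underlying rate purely for illustration; the series name and numbers are hypothetical.

```python
import numpy as np

# Synthetic capability series (e.g., energy stored per constant dollar by year).
rng = np.random.default_rng(1)
years = np.arange(1900, 2005, 5)
capability = 0.5 * np.exp(0.07 * (years - 1900) + rng.normal(scale=0.2, size=years.size))

# Fit log(capability) = a + g*t; exp(g) - 1 is the implied annual progress rate.
g, a = np.polyfit(years - 1900, np.log(capability), 1)
print(f"estimated annual progress rate: {np.exp(g) - 1:.1%}")  # close to 7%
```

Fitting this regression per functional category and technology type yields the kind of annual progress rates (19-37% for information, 3-13% for energy) compared above.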

7.
The paper describes a systematic methodology that combines futures literacy and design thinking to enable the collective discovery of new and disruptive business niches. It is a participatory approach centred on design know-how, which promotes innovative forms of engagement and articulation. The proposed methodology combines experience in designing and applying foresight approaches and futures literacy knowledge labs with a multidisciplinary understanding of institutional context.

The methodology fosters decision-making processes that embrace complexity and treat uncertainty as a resource, thus improving an organisation's capacity to use the future to expand its understanding of the present. It has been applied at the Center for Strategic Studies and Management (CGEE), an organisation where institutionalised foresight and technology assessment take place in Brazil, especially in support of Science, Technology and Innovation (STI) policy design and implementation, as well as evaluation. However, its clients also include different ministries within government and industries alike.

The article outlines the ways in which the organisation involved all its collaborators in jointly rethinking its future, building upon collective intelligence, narrative building, sense making, framing and reframing. The design principles called for these experiments to follow a collective learning curve that enables a renewed focus on systemic and transformative innovation. The crafting of new strategic questions was inspired by jointly expanding the understanding of the imaginary futures of the interrelated systems in which the organisation might play a role. As a consequence, new and disruptive possible roles for the institution were identified. These insights then informed the assessment and choices for the redesign of the business strategy.

This paper presents the methodology for combining design thinking and futures literacy, the application of this methodology to CGEE, and the major findings of the overall exercise. Readers will find out about the impact of this exercise on the organisation's approach to both its own strategic positioning and to the design and implementation of foresight and strategic studies. The paper concludes by outlining the implications of the proposed methodology for foresight practice.


8.
There is a general and growing displeasure with the commonly used methods by which hospital output is measured and, therefore, with the methods for measuring hospital costs. (Discussions of output measurement problems may be found in Reder (1965), Berry (1967), Somers and Somers (1967), Lave and Lave (1971), Newhouse (1970), Rafferty (1971), and Lee and Wallace (1971), among others.) This disaffection springs largely from the questionable assumption of homogeneity that is implied when output is measured in the traditional units (number of patients or patient-days of care), for it is increasingly evident that output is not homogeneous in this respect. (For example, Lave and Lave (1971) have shown that hospitals differ in the mix of case-types they treat, and I have shown previously that case-type proportions vary in the short run within the individual hospital system (1971 and 1972).) Thus, attention is turning increasingly toward the measurement and analysis of case-mix behaviour, since these variations in the patient census reflect variations in output mix itself.

This study is limited to just one of the problems that arise in connection with any case-mix analysis: how to specify the case-types. Case-types may be identified by means of a few very broad categories or on the basis of several thousand specific diagnoses, but there is a trade-off between the degree of specificity and the ease of obtaining and handling the requisite empirical data. In the hope of facilitating future research efforts in this area, this paper examines several alternative methods of specifying the case-types, for the limited purpose of identifying differences among them in their sensitivity to case-mix variations.
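One simple way to quantify such sensitivity is an index of dissimilarity between case-mix proportion vectors at two points in time, computed under coarse and fine groupings. The sketch below, with hypothetical admissions data, illustrates the general idea rather than the paper's own measure: broad categories can hide variation that a fine specification reveals.

```python
import numpy as np

def dissimilarity(p, q):
    """Index of dissimilarity between two case-mix proportion vectors."""
    return 0.5 * np.abs(np.asarray(p) - np.asarray(q)).sum()

# Hypothetical shares of admissions by fine diagnosis group in two months.
fine_t1 = np.array([30, 10, 25, 15, 12, 8]) / 100
fine_t2 = np.array([22, 18, 20, 20, 10, 10]) / 100

# The same data under a broad two-category grouping (e.g., surgical vs medical).
broad_t1 = np.array([fine_t1[:3].sum(), fine_t1[3:].sum()])
broad_t2 = np.array([fine_t2[:3].sum(), fine_t2[3:].sum()])

print("fine grouping :", dissimilarity(fine_t1, fine_t2))    # 0.15: variation visible
print("broad grouping:", dissimilarity(broad_t1, broad_t2))  # 0.05: variation hidden
```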

9.
In 2013, there was a joint commitment to “long term strategic EU-Russia energy cooperation” (EU/RF Roadmap, ‘Roadmap EU-Russia Energy Cooperation until 2050’, European Commission and Russian Government, March 2013, p. 4, available at <https://ec.europa.eu/energy/sites/ener/files/documents/2013_03_eu_russia_roadmap_2050_signed.pdf>). Whilst centred on oil and gas, the roadmap notes that “the importance of renewables for EU-Russia energy relations should grow too” (ibid., p. 21), and that for energy efficiency, “cooperation potential is immense and could… contribute to the objective of a Pan-European energy area” (ibid., p. 26). Given this shared objective, this article analyses EU and Russian energy decarbonisation policy objectives and considers the potential for a supplementary trade relationship based on renewable energy flows and decarbonisation-related technology, as well as the implications for existing energy trade. Despite declarative statements of mutual interest, shared objectives and cooperation in decarbonisation policy, there had been very limited cooperation by early 2016. The EU has set ambitious plans to decarbonise its economy and energy sector by 2050. In Russia, however, energy policy is dominated by hydrocarbon exports, decarbonisation targets are modest, and there are major problems with their implementation. The drivers of EU and Russian energy policies are evaluated, and the argument advanced is that different understandings of energy security and types of energy governance pose major obstacles to decarbonisation cooperation and trade. However, it is argued that ideas about energy policy and security are contested and subject to change, and there exists significant potential for mutual gain and cooperation in the longer term.

10.
In technological forecasting and futures research on social change, the term wild card (a.k.a. disruptor or STEEP surprise) traditionally refers to a plausible future event that is estimated to have low probability but high impact should it occur. This article introduces:
1.
A Type II Wild Card, defined as having high probability and high impact as seen by experts if present trends continue, but low credibility for non-expert stakeholders of importance.
2.
A four-level typology of wild cards, leading to a systematic methodology for monitoring the emerging awareness and credibility of high probability disruptors and for assessment of stakeholder-specific views about them.
An informal pilot test of the methodology both indicated that the approach has practical value and highlighted the importance of highly plausible tipping points that could rapidly lead to massive disruption, toward either collapse or reformation, in the complex adaptive systems (CAS) making up human civilization. For reasons of historical continuity, wild card-related nomenclature is used throughout most of this article, although the term STEEP Surprise is advocated for further work (STEEP being a frequently used acronym denoting five conceptual sectors of importance). Suggestions for further work include:
• Research on how to diminish the discounting of Type II phenomena by institutional leaders.
• Monitoring of transitions in the perceived credibility of critical Type II STEEP Surprises by thought leaders.
• A Snowball Survey of wisdom leaders with multidisciplinary expertise from all walks of life, to identify the specific Type II possibilities (especially positive ones) they see as having the greatest importance.
• A Cooperative Clearinghouse on STEEP Surprises for sharing intelligence on highly probable/highly disruptive events, together with plausible impacts and proactive policies.

11.
12.
Technical and environmental efficiency of some coal-fired thermal power plants in India is estimated using a methodology that accounts for firms' efforts to increase the production of good output and reduce pollution with the given resources and technology. The methodology used is the directional output distance function. Estimates of firm-specific shadow prices of pollutants (bad outputs) and of the elasticity of substitution between good and bad outputs are also obtained. The technical and environmental inefficiency of a representative firm is estimated as 0.06, implying that the thermal power generating industry in the Andhra Pradesh state of India could increase production of electricity by 6% while decreasing generation of pollution by 6%. This result shows that there are incentives, or win-win opportunities, for the firms to voluntarily comply with environmental regulation. There is significant variation in the marginal cost of pollution abatement, or shadow prices of bad outputs, across the firms, and the marginal cost of pollution abatement increases with the amount of pollution reduced. This result calls for the use of economic instruments such as pollution taxes instead of the command-and-control regulation currently used in India to reduce air pollution.
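In standard notation, the directional output distance function for input vector $x$, good output $y$, and bad output $b$, with direction $g = (g_y, g_b)$, is

$$\vec{D}_o(x, y, b; g_y, g_b) = \max\bigl\{\beta : (y + \beta g_y,\; b - \beta g_b) \in P(x)\bigr\},$$

so an estimated value of 0.06 with $g = (y, b)$ means output can be expanded by 6% while pollution is contracted by 6% using the same inputs; shadow prices of the bad outputs follow from the function's gradients via duality.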
M. N. Murty (corresponding author)

13.
The goal of this article is to point out the likely reasons for the differences between the results obtained by Radulescu and Barlow (Radulescu, R. and Barlow, D. (2002), Economics of Transition, 10(3), pp. 719–45) and those presented in most other research papers on the growth determinants in the post-communist countries. The authors also present the consequences of the specific composition of the EBRD index for results obtained from econometric models in which the index is used as a variable.

14.
The application of the rational choice postulate to a political context invariably leads to the conclusion that most voters are ill informed when deciding whom to vote for. In this paper, the authors conduct an empirical evaluation of the rational ignorance theory, based on the model developed by Rogoff and Sibert (1988, Review of Economic Studies, LV, pp. 1–16) and on the assumption that better-informed voters reward political candidates who show better performance. Performance levels are established through the construction of an empirical frontier using the Data Envelopment Analysis (DEA) methodology. According to our results, based on the 1997 Portuguese local elections, even though swing voters do not necessarily behave as rationally ignorant voters, a large majority of voters are rationally ignorant.
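An output-oriented CCR DEA score of the kind used to rank performance can be computed with a small linear program. The sketch below is a generic illustration, not the authors' specification; the data and variable names are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

def dea_output_efficiency(X, Y, o):
    """Output-oriented CCR DEA score for unit o.
    X: (n_units, n_inputs), Y: (n_units, n_outputs). Score >= 1; 1 = efficient."""
    n, m = X.shape
    s = Y.shape[1]
    # Decision variables: [phi, lambda_1..lambda_n]; maximize phi (minimize -phi).
    c = np.concatenate([[-1.0], np.zeros(n)])
    # Input constraints: sum_j lambda_j * x_ij <= x_io
    A_in = np.hstack([np.zeros((m, 1)), X.T])
    b_in = X[o]
    # Output constraints: phi * y_ro - sum_j lambda_j * y_rj <= 0
    A_out = np.hstack([Y[o][:, None], -Y.T])
    b_out = np.zeros(s)
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([b_in, b_out]),
                  bounds=[(0, None)] * (n + 1))
    return -res.fun

# Hypothetical municipalities: one input (spending), one output (services delivered).
X = np.array([[10.0], [12.0], [8.0], [15.0]])
Y = np.array([[100.0], [105.0], [80.0], [110.0]])
for o in range(len(X)):
    print(o, round(dea_output_efficiency(X, Y, o), 3))
```

Units with score 1 lie on the empirical frontier; scores above 1 measure how much more output a unit could deliver with its observed inputs, which is the performance signal the voting model assumes informed voters respond to.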
José da Silva Costa (corresponding author)

15.
Strictly proper scoring rules are designed to truthfully elicit subjective probabilistic beliefs from risk-neutral agents. Previous experimental studies have identified two problems with this method: (i) risk aversion causes agents to bias their reports toward the probability 1/2, and (ii) for moderate beliefs agents simply report 1/2. Applying a prospect theory model of risk preferences, we show that loss aversion can explain both of these behavioral phenomena. Using the insights of this model, we develop a simple off-the-shelf probability assessment mechanism that encourages loss-averse agents to report true beliefs. In an experiment, we demonstrate the effectiveness of this modification in both eliminating uninformative reports and eliciting true probabilistic beliefs.
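For the quadratic (Brier) rule, the classic example of a strictly proper scoring rule, truth-telling under risk neutrality is immediate: reporting $q$ for an event believed to occur with probability $p$ yields expected score

$$S(q) = p\bigl[1 - (1-q)^2\bigr] + (1-p)\bigl[1 - q^2\bigr], \qquad S'(q) = 2(p - q),$$

which is maximized at $q = p$. With concave utility over the score, or loss aversion around a reference point, the first-order condition instead pulls the optimal report toward 1/2, which is the bias the paper models.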

16.
The explanation of state and local government expenditures has received considerable attention since Fabricant's study Trends in Government Activity Since 1900. These studies have been subject to at least two important shortcomings. One of their limitations stems from the estimation procedures used, while the other is the result of an incomplete model of the process underlying the determination of such expenditures. For the most part, past studies have used either cross-sectional data for a particular year or time-series data for a single state. Consequently, the explanations resulting from these analyses either fail to capture the dynamic aspects of the problem in the first case, or remain localized to a particular state in the second. Since expenditure decisions are influenced both by historical events acting through time and by economic, political, and demographic factors working at a point in time, studies which fail to integrate both types of information into the estimation process are incomplete.

The purpose of this paper is to suggest a methodology for using both types of information. Accordingly, the resulting technique is a more efficient approach for estimating the determinants of state and local government expenditures. The technique is a generalized Aitken estimator for a system of seemingly unrelated regressions, first introduced by Zellner (1962). The second problem with past research results from the inadequacy of our models of public goods and collective consumption: in general, the decision process underlying public provision of goods and services has not been subjected to comprehensive modeling. (Some work has begun in this area; see Haefele (1970, 1971, 1972) as well as the references he cites.) Empirical analyses of expenditure patterns have therefore been based on incompletely developed models. Our approach will be to suggest a model which is representative of the existing literature, sketch its theoretical foundation, and discuss the areas for future research. The present paper will not, however, attempt to develop a more complete model of the public decision process.
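Stacking the equations $y_i = X_i \beta_i + \varepsilon_i$, $i = 1, \dots, M$, with contemporaneous error covariance $\Sigma$ across equations, Zellner's estimator is the feasible GLS (generalized Aitken) estimator

$$\hat{\beta} = \bigl(X'(\hat{\Sigma}^{-1} \otimes I_T)X\bigr)^{-1} X'(\hat{\Sigma}^{-1} \otimes I_T)y,$$

where $\hat{\Sigma}$ is formed from equation-by-equation OLS residuals. It improves on OLS precisely when the cross-equation errors are correlated and the regressors differ across equations, which is what allows the cross-sectional and time-series information to be pooled.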

Section I of the paper briefly summarizes the primary research efforts in this area. It is followed by an explanation of the model and of the technique used for this study. Section IV presents the results for nine expenditure categories for state and local governments in the U.S. in 1957, 1962, and 1967. The last section summarizes the conclusions of the paper and discusses the scope for further research.

17.
In 1910, the divorce rate per 1000 members of the US population stood at 0.9. (Historical divorce rates can be found in the Statistical Abstract of the US (1981).) This rate showed a slow upward trend for the next 50 years, and by 1960 had more than doubled to 2.2. It took only 20 years for the rate to more than double again, so that by 1980 it was 5.3. Over the last 20 years, by contrast, the marriage rate has experienced mild fluctuations between 10.0 and 11.0 per 1000 of the population, with no discernible trend. If the same pattern for both rates holds until the year 2000, the annual number of divorces will exceed the annual number of marriages.
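The acceleration is clear if these figures are converted to implied average compound growth rates:

$$\left(\frac{2.2}{0.9}\right)^{1/50} - 1 \approx 1.8\%\ \text{per year (1910-1960)}, \qquad \left(\frac{5.3}{2.2}\right)^{1/20} - 1 \approx 4.5\%\ \text{per year (1960-1980)},$$

roughly two and a half times the earlier pace.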

Although sociologists have researched divorce extensively, only a few economic studies exist. This is unfortunate, since divorce is likely to have considerable impact on economic variables such as hours of work, labour force participation, human capital accumulation, work performance and earnings. (For a recent study of hours at work and labour supply, see Green and Quester (1982); for studies on earnings and work performance, see Santos (1975) or Hoffman and Holmes (1976). King (1982) argues that couples anticipating divorce will individually invest more heavily in human capital, since the costs of any current investment are at least partially absorbed by the spouse.) Without denying the influences of peer groups, social norms and role models, it seems reasonable to suggest that pecuniary considerations may also help to explain divorce.

A search through the economics literature uncovered only two studies of the determinants, as opposed to the implications, of divorce: one by Orcutt, Caldwell and Wertheimer (1976) and another by Becker, Landes and Michael (BLM) (1977). The study by BLM is by far the more widely cited of the two. The authors of both studies argue that the current state of marriage is the primary determinant of divorce. BLM, for example, assert that ‘the probability of divorce is smaller the greater the expected gain from marriage, and the smaller the variance of the distribution of unanticipated gains from marriage’. BLM, in other words, view marriage as a risky investment with a distribution of returns. The alternative is divorce, which, by implication in BLM, involves a certain return.

The first contribution of this study is to draw out the implications of an alternative view, in which the investment in marriage is certain but the investment in divorce is risky.

The second contribution lies in presenting formal expected-utility-maximizing models of an individual and/or a couple contemplating divorce which can be tested empirically. The third contribution is the method developed to test the predictions of the models.

The paper follows a simple format. Section I presents the models. Section II provides an explanation of the data used in the empirical tests. Methodology and results are presented in sections III and IV. Caveats are observed in section V. The final section closes with a summary of the arguments and evidence.

18.
Throughout the nineties, a number of tender offers occurred in the Portuguese market. This article employs event study methodology to investigate their effects on the involved firms' shareholders. On average, these operations increased the market value of the involved firms by 2% to 3%. However, target shareholders appropriated most of this gain, earning 18% over their firms' previous value, whereas bidder shareholders seem to have gained nothing. These averages bent in the bidder shareholders' favour, however, when bidders held significant positions in the targets' capital before the bid. (Received: December 2002; accepted: September 2003. JEL classification: G14, G34. This paper corresponds to a revised version of chapter 6 of my PhD dissertation. I have greatly benefited from comments by my supervisors José Manuel Amado da Silva and Victor Mendes dos Santos, Pedro Pita Barros, participants in the 9º Encontro Nacional de Economia Industrial and in an internal seminar at the Faculdade de Economia e Gestão, and two anonymous referees. The responsibility for any remaining errors is, of course, exclusively mine. CMVM and BDP kindly provided the data used. Grant PRAXIS/PCSH/C/CEG/30/96 partially supported this research.)
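Event study methodology of this kind typically measures market-model abnormal returns around the bid announcement. In standard notation,

$$AR_{it} = R_{it} - (\hat{\alpha}_i + \hat{\beta}_i R_{mt}), \qquad CAR_i(t_1, t_2) = \sum_{t = t_1}^{t_2} AR_{it},$$

with $\hat{\alpha}_i, \hat{\beta}_i$ estimated over a pre-announcement window; the 2-3% combined gains and 18% target gains reported above would then correspond to average cumulative abnormal returns for the relevant groups of firms.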

19.
As the high tariff barriers of the inter-war period have been gradually reduced over the past twenty years, non-tariff factors have taken on an increasingly important role. One of the more notable of these factors is a country's level of excise taxes. Since these taxes are applied to both imports and domestic production, a given percentage change in excise taxes will obviously have a smaller influence on trade than an equal percentage change in tariffs. Nevertheless, excise taxes can be used to some extent as a substitute for tariffs. Hence, it would seem desirable to determine the degree to which such substitution will affect the volume of imports.

The role of taxes and tariffs in trade models has been considered in a number of theoretical discussions. (The effects of commodity taxes on the terms of trade and on domestic welfare have been analysed by Mundell (1960) and by Friedlaender and Vandendorpe (1968).) However, virtually no effort has been made to examine the relationships among excise taxes, tariffs and imports in order to determine the extent to which countries can use excise taxes as a device to counterbalance the movement toward freer trade under the aegis of the G.A.T.T. This study will attempt to rectify this omission.
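A stylized illustration of why excise taxes have a weaker trade effect than tariffs (our simplification, not the paper's model): with world price $P_w$, domestic producer price $P_h$, tariff rate $\tau$ applied to imports only, and excise rate $e$ applied to both,

$$\frac{P_m}{P_d} = \frac{P_w(1+\tau)(1+e)}{P_h(1+e)} = \frac{P_w}{P_h}(1+\tau),$$

so in this simple case the excise leaves the import/domestic relative price unchanged and affects imports only through the overall price level and demand, whereas the tariff shifts the relative price directly. Any substitution of excises for tariffs must therefore work through these weaker, indirect channels.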

20.
Summary. The paper by C. Ma [1] contains several errors. First, the statement and proof of Theorem 2.1, on the existence of an intertemporal recursive utility function as the unique solution to the Koopmans equation, must be amended: several additional technical conditions concerning the consumption domain and the measurability of the certainty equivalent and of the utility process need to be assumed for the validity of the theorem. Second, the assumptions of Theorem 3.1 need to be amended to include Feller's condition that, for any bounded continuous function f ∈ C(S × ℝⁿ₊), E[f(s_{t+1}, x) | s_t = s] is bounded and continuous in (s, x). In addition, for Theorem 3.1, the price p, the endowment e, and the dividend rate, as functions of the state variable s, are assumed to be continuous. Feller's condition in Theorem 3.1 ensures that the value function is well defined. This condition needs to be assumed even for expected additive utility functions (see Lucas [2]). It is noticed that, under this condition, the right-hand side of equation (3.5) in [1] defines a bounded continuous function in (s, x). The proof of Theorem 3.1 remains valid with this remark in place. A correct version of Theorem 2.1 in [1] is stated and proved in this corrigendum. Ozaki and Streufert [3] were the first to cast doubt on the validity of this theorem. They point out correctly that additional conditions ensuring the measurability of the utility process need to be assumed; this condition is identified as condition CE4 below. In addition, I notice that the consumption space is not suitably defined in [1], especially when an unbounded consumption set is assumed. In contrast to what is claimed in [3], I show that a uniformly bounded consumption set X and a stationary information structure are not necessary for the validity of Theorem 2.1. I would like to thank Hiroyuki Ozaki and Peter Streufert for pointing out some mistakes made in the original article. Comments and suggestions from an anonymous referee are gratefully appreciated. Financial support from the SSHRC of Canada is acknowledged.
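In the standard notation of this literature, the recursive utility process solves the Koopmans equation

$$U_t = W\bigl(c_t,\ \mu_t(U_{t+1})\bigr),$$

where $W$ is the aggregator and $\mu_t$ a certainty equivalent. The corrigendum's additional conditions, on the consumption domain and on the measurability of $\mu$ and of the utility process (condition CE4), are what guarantee that this recursion has a unique, well-defined solution.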

