Licentiate theses from Mid Sweden University (RSS from the library) http://www.bib.miun.se/ This is a search for Mid Sweden University Library in the DiVA portal http://miun.diva-portal.org

Britta Andres Paper-based Supercapacitors http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-22410 <p>The growing market of mobile electronic devices, renewable off-grid energy sources and electric vehicles requires high-performance energy storage devices. Rechargeable batteries are usually the first choice due to their high energy density. However, supercapacitors have a higher power density and a longer lifetime than batteries. For some applications supercapacitors are more suitable than batteries. They can also be used to complement batteries in order to extend a battery's lifetime. The use of supercapacitors is, however, still limited due to their high cost. Most commercially available supercapacitors contain expensive electrolytes and costly electrode materials.</p><p>In this thesis I present the concept of cost-efficient, paper-based supercapacitors. The idea is to produce supercapacitors with low-cost, green materials and inexpensive production processes. We show that supercapacitor electrodes can be produced by coating graphite on paper. Roll-to-roll techniques known from the paper industry can be employed to facilitate economical large-scale production. We investigated the influence of paper on the supercapacitor's performance and discussed its role as a passive component. Furthermore, we used chemically reduced graphite oxide (CRGO) and a CRGO-gold nanoparticle composite to produce electrodes for supercapacitors. The highest specific capacitance was achieved with the CRGO-gold nanoparticle electrodes. However, materials produced by chemical synthesis and intercalation of nanoparticles are too costly for large-scale production of inexpensive supercapacitor electrodes.
Therefore, we introduced the idea of producing graphene and similar nano-sized materials in a high-pressure homogenizer. Layered materials like graphite can be exfoliated when subjected to high shear forces. In order to form mechanically stable electrodes, binders need to be added. Nanofibrillated cellulose (NFC) can be used as a binder to improve the mechanical stability of the porous electrodes. Furthermore, NFC can be prepared in a high-pressure homogenizer, and we aim to produce both NFC and graphene simultaneously to obtain an NFC-graphene composite. The addition of 10% NFC relative to the amount of graphite increased the supercapacitor's capacitance, enhanced the dispersion stability of homogenized graphite and improved the mechanical stability of graphite electrodes in both dry and wet conditions. Scanning electron microscope images of the electrodes' cross sections revealed that NFC changed the internal structure of graphite electrodes depending on the type of graphite used. Thus, we discussed the influence of NFC and the electrode structure on the capacitance of supercapacitors.</p> Mon, 18 Aug 2014 08:30:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-22410

Itai Danielski Energy efficiency of new residential buildings in Sweden : Design and Modelling Aspects http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-21933 <p>Energy security and climate change mitigation have been discussed in Sweden since the oil crisis in the 1970s. Sweden has since increased its share of renewable energy resources to reach the highest level among the EU member states, but it is still among the countries with the highest primary energy use per capita. Not least because of this, increasing energy efficiency is important, and it is part of the Swedish long-term environmental objectives.
Large potential for improving energy efficiency can be found in the building sector, mainly in the existing building stock but also in newly constructed buildings.</p><p>In this thesis, criteria for energy efficiency in new residential buildings are studied, several design aspects of residential buildings are examined, and possible further analysis from an energy system perspective is discussed. Three case studies of existing residential buildings were analysed, including one detached house and multi-storey apartment buildings. The analysis was based on both energy simulations and measurements in residential buildings.</p><p>The results show that the calculated specific final energy demand of residential buildings, before they are built, is too rough an indicator to explicitly steer society toward lower final energy use in the building sector. One reason is the assumptions made in the calculations before the buildings are built. Another reason is the interior building design. A design that includes relatively large areas of heated corridors, service rooms and storage rooms will lower the specific final energy demand without improving the building's energy efficiency, which might increase both the total final energy demand and the use of construction materials in the building sector.</p><p>Efficient thermal envelopes, which involve both thermal resistance and the shape of the building, are essential in the construction of energy-efficient buildings.
The shape factor of buildings was found to be an important variable for heat demand in buildings located in temperate and colder climates, particularly if they are exposed to strong winds.</p><p>From a system perspective, energy efficiency measures and the performance of the end-use heating technology in buildings should be evaluated together with the energy supply system, including the dynamic interaction between them.</p> Fri, 16 May 2014 13:18:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-21933

Salim Reza Phase-Contrast and Spectroscopic X-ray Imaging for Paperboard Quality Assurance http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-21910 <p>The end-use performance of a paperboard depends on its quality. The major properties of a good-quality paperboard include consistency in the expected ratio between the thickness of the core and the coating layers, and uniformity in the coating layer. Measurement systems using X-rays to monitor these properties could assist the paperboard industry to assure the quality of its products in a non-destructive and automatic manner.</p><p>Phase Contrast X-ray Imaging (PCXI) has been used successfully to look inside a wide range of objects using synchrotron radiation sources. Recent advancements in the grating interferometer based PCXI technique enable high-quality phase-contrast and dark-field images to be obtained using conventional X-ray tubes.
The dark-field images map the scattering inhomogeneities inside objects and are very sensitive to micro-structures; thus, they can reveal useful information about an object's inner structures, such as the fibre structures inside paperboards.</p><p>In this thesis, methods using the spectroscopic X-ray imaging and PCXI techniques have been demonstrated for measuring paperboard quality. The thicknesses of the core and the coating layer of a paperboard coated on only one side can be measured using the spectroscopic X-ray imaging technique. However, the limited spectral and spatial resolution offered by the measurement system used led to the measured thicknesses of the layers being lower than their actual thicknesses in the paperboard sample. Suggestions have been made for overcoming these limitations and enhancing the performance of the method.</p><p>The dark-field signals from paperboard samples with different quality indices are analysed. The isotropic and anisotropic scattering coefficients for all of the samples have been calculated. Based on the correlation between the isotropic coefficients and the quality indices of the paperboards, suggestions have been made for paperboard quality measurements.</p> Mon, 12 May 2014 16:11:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-21910

Karin Ahlin Approaching the intangible benefits of a boundary object http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-21576 <p>Today's information society is constantly increasing the quantity of digital information that organisations have access to and depend on. Despite this dependency, few descriptions exist of the benefits that this digital information can provide to the organisation. Examples of what the organisation can use the information for include business intelligence and business processes.
The absence of such benefit descriptions results in missed opportunities in organisational management and a failure to cultivate the artefact. In terms of a practical operational work role, this means that the artefact just exists and that there are no decisions, communication or discussions connected to it. Earlier research about benefits in the Information Systems field focused on describing the process of finding benefit factors from different IT investments and how these investments can be measured financially. As a result, only the measurable benefits were taken into consideration. Later benefit management research has shown interest in the intangible benefit factors as well and has added this as an activity in the evaluation process. Today's view is that the benefit consists of both tangible and intangible benefit factors. This thesis emphasises benefit factors found by means of qualitative research in organisations producing Technical Information (TI). TI is information connected to goods and services and is a part of a product. The intangible benefit factors found to be connected to TI are semantic interoperability and knowledge. Semantic interoperability is beneficial both for the organisation and the individuals – in the first case exemplified by a uniform working process and in the second by efficiency in internal communication. Knowledge also provides benefit both to the organisation and the individuals – the organisation can operate without depending on certain individuals, and information gives the individuals mobility in their profession. The next part of the thesis discusses information management's impact on benefit factors. In the case of an autocratic approach, it is the organisation that benefits most, whereas a decentralised management style provides the individual co-workers with a greater number of benefit factors.
This shows that information management is an important and decisive ingredient, and that it affects benefit factors. One step in the direction of converting the intangible benefit factors into tangible ones is to visualise them. In this work the theoretical lens provided by a boundary object has been used. This lens adds a qualitative view on cross-boundary information and has efficiency approaches. These approaches are the syntactic, the semantic and the pragmatic. Via interpretations from the thesis's two empirical cases, these approaches are "measured" by interpretations and visualised by the three leaves of a clover. This gives the opportunity to describe what information efficiency, in this case connected to a positive expectation, can contribute to the organisation or the individuals. By this procedure, different cases or time aspects can be compared, thereby providing a basis for decision-making, communication and discussion. Future research in this area can take different directions – one is to investigate whether the intangible benefit factors can be turned into measurable ones. In this way, the internal organisation can be provided with better knowledge of the digital information's impact. Another research direction is to investigate how the passage of time affects the benefit factors that digital information gives the organisation.</p> Wed, 12 Mar 2014 12:52:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-21576

Sinke Henshaw Osong Mechanical Pulp Based Nano-ligno-cellulose : Production, Characterisation and their Effect on Paper Properties http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-21555 <p>Almost all research on biorefinery concepts is based on chemical pulping processes and on ways of utilising lignin, hemicelluloses and extractives, as well as a part of the remaining cellulose, for the production of nanomaterials in order to create more valuable products than today.
Within the Forest as a Resource (FORE) research program at FSCN we are utilising the whole chain of unit processes from forestry to final products such as paper and board, where the pulping process research focuses on high-yield processes such as TMP and CTMP. As these process solutions preserve, or only slightly change, the properties of the original wood polymers and extractives, the idea is to find high-value-adding products designed by nature.</p><p>From an economic perspective, the production of nanocellulose from a chemical pulp is quite expensive, as the pulp has to be either enzymatically pre-treated (e.g. with a mono-component endoglucanase) or chemically oxidised using the TEMPO (2,2,6,6-tetramethylpiperidine-1-oxyl)-mediated oxidation method in order to make it possible to disrupt the fibres by means of homogenisation.</p><p>For high-yield pulping processes such as TMP and CTMP, the idea of this study was to investigate the possibility of using low-quality material from fines fractions for the production of nano-ligno-cellulose (NLC). The integration of an NLC unit process in a high-yield pulping production line has the potential to become a future way to improve the quality level of traditional products such as paper and board grades. The intention of this research work was to use this concept to create a knowledge base that makes it possible to develop a low-cost production method for its implementation.</p><p>In order to study the potential of this concept, the treatment of thermo-mechanical pulp (TMP) fines fractions by means of homogenisation was studied. It seems possible to homogenise fine particles of thermo-mechanical pulp (1% w/v) to NLC.
A corresponding fines fraction from bleached kraft pulp (BKP) was tested as a reference at 0.5% w/v concentration.</p><p>The objective of this work was to develop a methodology for producing mechanical pulp based NLC from fines fractions and to utilise this material as a strength additive in paper and board grades. Laboratory sheets of CTMP and BKP, with addition of their respective NLC, were made in a Rapid Köthen sheet former. It was found that handsheets of pulp fibres blended with NLC showed improved z-strength and other important mechanical properties at similar sheet densities.</p><p>The characterisation of the particle size distribution of NLC is both important and challenging, and the crill methodology developed at Innventia (formerly STFI) during the 1980s was tested to see whether it would be both fast and reliable enough. The crill measurement technique is based on the optical responses of a micro/nano particle suspension at two wavelengths of light: UV and IR. The crill values of TMP- and CTMP-based nano-ligno-cellulose were measured as a function of the homogenisation time. The results showed that the crill values of both TMP-NLC and CTMP-NLC correlated with the homogenisation time.</p> Wed, 12 Mar 2014 09:19:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-21555

Klas Palm Understanding Innovation as an Approach to Increasing Customer Value in the Context of the Public Sector http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-23883 <p>We live in a society that is constantly developing. New challenges and new opportunities emerge all the time. Fortunately, human beings have a fantastic ability to adapt and find new solutions in new situations, i.e. to be innovative. Not just individuals but also organizations need to make room for innovative development. Organizations need to work on how to develop new products, services and processes. At the same time, each organization needs to work on improving the quality of existing activities.
Previous research has shown that high value for the customer, i.e. that which often constitutes the goal of quality work, is achieved by the organization working in parallel on developing existing products, services and processes while at the same time driving innovative development forward. How organizations cope with the balance between these two perspectives has been researched and written about considerably when it comes to manufacturing companies. However, there is a lack of documented knowledge regarding how best to balance these two perspectives in the service sector in general and the public sector in particular. This thesis has been written with a view to contributing to existing knowledge about how innovation can be understood as a possible way of increasing customer value within the public sector. It seeks to create insight into how innovation is perceived as a phenomenon for increasing value for the customer and into how innovation work relates to other aspects of current quality practices within the Swedish public sector. It has also been written with a view to contributing to a greater understanding of how some of the quality movement's tools can increase innovation capacity in the public sector.</p><p>To fulfil this aim, a literature study and case studies have been performed. The case studies were performed in Sweden at Lantmäteriet (the Swedish Land Survey) and the Swedish International Development Cooperation Agency (Sida). One of the case studies also included the Swedish Ministry for Foreign Affairs and the Swedish Government. Three research reports were written between 2012 and 2014, and these form the basis of the thesis.</p><p>The research findings give examples of organizations whose quality work focuses closely on systematic measurement and control of the work process and much less on innovatively developing new ways of increasing customer value.
The findings also show that there are a number of obstacles that the public administrations studied face in combining quality work with a greater ability to work innovatively. Given that innovative development is an important strategy for increasing customer value, the study indicates that some of the existing quality work is an obstacle to achieving greater customer value in the public sector.</p><p>At the same time, there are tools and values in the quality movement that can improve an organization's ability to innovate. The quality movement's core values and tools, such as systematic cyclical learning, can constitute important means of creating favourable conditions for improving innovative ability. This underlines the need to identify where quality work strengthens and where it hinders innovation processes. The research findings also stress the need to radically improve the work on innovative processes in the public sector in order to achieve the overarching goals of public administration more effectively.
</p> Thu, 18 Dec 2014 16:08:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-23883

Susanne Boija On metal ion chelates and conditional stability constant determination : Method development and selective ion flotation of chelating surfactants http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-24029 Fri, 2 Jan 2015 12:51:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-24029

G M Atiqur Rahaman Image analysis approach for modeling color predictions in printing http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-24030 Fri, 2 Jan 2015 12:56:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-24030

Helen Lusth Some aspects on the energy dissipation during canter chipping http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-20952 Fri, 3 Jan 2014 15:11:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-20952

Cecilia Lidenmark Time induced spreading and adhesion of latex polymers http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-20951 Fri, 3 Jan 2014 15:03:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-20951

Niklas Johansson Spectral goniophotometry : applications to light scattering in paper http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-20950 Fri, 3 Jan 2014 13:36:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-20950

Anna Åslund Value creation within societal entrepreneurship : a process perspective http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-20184 <p>Societal entrepreneurship is given considerable attention within the literature and academic research, despite the fact that it is an area that still needs considerable attention and research. The main purpose of societal entrepreneurs is to create societal value, but it can be difficult to understand value creation within the area. Important components within Total Quality Management (TQM) are process orientation and value creation.
A TQM perspective with processes in focus provides opportunities to clarify societal value creation within societal entrepreneurship initiatives.</p><p>The main purpose of this thesis has been to explore how societal value is created within the area of societal entrepreneurship, and the underlying purpose has been to contribute to the development of knowledge and understanding of the societal entrepreneurship area. In order to fulfil this purpose, one literature case study and three empirical case studies have been conducted with processes in focus. The literature case study was conducted first, and it resulted in a theoretical process map based on a process perspective, which showed how societal value was created within a societal entrepreneurship initiative. After that, the three empirical case studies were conducted separately, and the findings from the empirical case studies were compared with the previously developed theoretical process map. A cross-case analysis was made to find out whether the process map could be confirmed, developed or rejected.</p><p>The result of the case studies contributes to earlier findings within research and gives a common, comprehensive and simplified picture of a complex phenomenon and an opportunity to understand how societal value is created. A general overall process map is presented that gives a picture of how value is created within the area of societal entrepreneurship. The result shows the management process and support process fields. The map also shows a main process that is further developed with input, output and sub-processes. The studies point out that societal value is created through processes and that societal value creation can be described from a process orientation perspective.
Important components in creating societal value have been found to be: 'unidentified needs'; 'knowledge about the context'; 'identified need'; 'an idea or a vision'; and some kind of 'organization'; and important activities in creating value seem to be: 'being in the context'; 'analysis of knowledge'; 'searching for solution'; 'organize and mobilize'; and 'realize'. Fields where support processes of importance to societal value creation are performed have been identified. Those fields are 'creation of financing opportunities'; 'performance of political decisions and acts'; 'development and use of networks'; 'establishment of initiative'; 'creation of media information'; 'development and use of scientific results'; and 'development and use of competence'.</p><p>The map does have potential for development. Further studies need to be done within the area concerning how societal value is created, in order to obtain an even more comprehensive process map of the societal entrepreneurship area, but the result presented in this thesis is a start toward understanding how societal value is created and toward developing knowledge and understanding of the societal entrepreneurship area.</p> Mon, 11 Nov 2013 15:25:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-20184

Jennie Sandström Phytoplankton response to a changing climate in lakes in northern Sweden http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-20014 <p>From a climate change perspective, increased air temperatures are already a reality and are expected to increase even more in the future, especially in areas at high latitudes. The present thesis therefore addresses the influence of climate change on the physical properties and the phytoplankton communities of typical small, oligotrophic lakes in northern Sweden (62-64˚N).
In the first part of the study, we found a significant trend toward increasingly earlier ice break-up (10 lakes, from 1916 to 2010). The timing of ice break-up was strongly influenced by the April air temperature, indicating that the expected increases in air temperature in the future will also result in earlier ice break-up. We also used concentrations of chlorophyll a (chl a) as estimates of phytoplankton biomass and discovered a positive relationship between surface water temperature and concentrations of chl a in Lake Remmaren (from 1991 to 2008). The second part of the thesis focuses on climatic conditions and cyanobacteria abundance in three small, oligotrophic lakes in northern Sweden: Lake Remmaren, Lake S. Bergsjön and Lake Gransjön. The concentration and relative abundance of cyanobacteria differed between 2011 and 2012, years with different climatic conditions. The "warm" year of 2011 had higher concentrations and relative abundance of cyanobacteria than the "cold" year of 2012. Trends of increasing surface water temperatures as well as increasing abundance of cyanobacteria in August were found in Lake Remmaren (from 1988 to 2011). The direct or indirect effects of warming had a positive effect on cyanobacteria abundance, since nutrients (Tot N and Tot P) did not display an increasing trend in Lake Remmaren. An analysis of the composition of phytoplankton species in Lake Remmaren, Lake S. Bergsjön and Lake Gransjön revealed that the cyanobacterium Merismopedia sp. was more common in 2011 than in 2012. If different cyanobacteria become more common in oligotrophic lakes in the future, the functioning of lake ecosystems may be affected. Small zooplankton eat small phytoplankton, and if smaller phytoplankton species, e.g. cyanobacteria, increase at the expense of other phytoplankton groups, an extra step in the food chain might be added.
Less energy might then be transferred to the upper trophic levels, because many cyanobacteria contain toxic compounds and are less edible than other phytoplankton groups. An increase of toxin-containing cyanobacteria can also make lakes less attractive for recreational purposes in the future.</p> Thu, 17 Oct 2013 12:32:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-20014

Suryanarayana Murthy Muddala View Rendering for 3DTV http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-19194 <p>Three-dimensional (3D) technologies are advancing rapidly. Three-Dimensional Television (3DTV) aims at creating a 3D experience for the home user. Moreover, multiview autostereoscopic displays provide a depth impression without the need for any special glasses and can be viewed from multiple locations. One of the key issues in the 3DTV processing chain is content generation from the available input data formats: video plus depth, and multiview video plus depth. These data allow virtual views to be produced using depth-image-based rendering. Although depth-image-based rendering is an efficient method, it is known for the appearance of artifacts such as cracks, corona and empty regions in rendered images. While several approaches have tackled the problem, reducing the artifacts in rendered images is still an active field of research.</p><p>Two problems are addressed in this thesis in order to achieve a better 3D video quality in the context of view rendering: firstly, how to improve the quality of rendered views using a direct approach (i.e. without applying specific processing steps for each artifact), and secondly, how to fill the large missing areas in a visually plausible manner using neighbouring details from around the missing regions. This thesis introduces a new depth-image-based rendering method and a depth-based texture inpainting method in order to address these two problems.
The first problem is solved by an edge-aided rendering method that relies on the principles of forward warping and one-dimensional interpolation. The other problem is addressed by a depth-included curvature inpainting method that uses texture details from the appropriate depth level around disocclusions.</p><p>The proposed edge-aided rendering and depth-included curvature inpainting methods are evaluated and compared with state-of-the-art methods. The results show an increase in objective quality and a visual gain over the reference methods. The quality gain is encouraging, as the edge-aided rendering method omits the specific processing steps for removing rendering artifacts. Moreover, the results show that large disocclusions can be effectively filled using the depth-included curvature inpainting approach. Overall, the proposed approaches improve content generation for 3DTV and, additionally, for free-viewpoint television.</p> Thu, 13 Jun 2013 13:23:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-19194

Mitra Damghanian The Sampling Pattern Cube : A Framework for Representation and Evaluation of Plenoptic Capturing Systems http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-19138 <p>Digital cameras have already entered our everyday life. Rapid technological advances have made it easier and cheaper to develop new cameras with unconventional structures. The plenoptic camera is one of these new devices; it captures light information that can then be processed for applications such as focus adjustment. High-level camera properties, such as the spatial or angular resolution, are required to evaluate and compare plenoptic cameras. With complex camera structures that introduce trade-offs between various high-level camera properties, it is no longer straightforward to describe and extract these properties.
Proper models, methods and metrics with the desired level of detail are beneficial for describing and evaluating plenoptic camera properties.</p><p>This thesis attempts to describe and evaluate camera properties using a model-based representation of plenoptic capturing systems, in favour of a unified language. The Sampling Pattern Cube (SPC) model is proposed; it describes which light samples from the scene are captured by the camera system. Light samples in the SPC model carry the ray and focus information of the capturing setup. To demonstrate the capabilities of the introduced model, property extractors for lateral resolution are defined and evaluated. The lateral resolution values obtained from the introduced model are compared with the results from the ray-based model and with ground truth data. The knowledge about how to generate and visualize the proposed model and how to extract camera properties from the model-based representation of the capturing system is collated to form the SPC framework.</p><p>The main outcomes of the thesis can be summarized in the following points: A model-based representation of the light sampling behaviour of the plenoptic capturing system is introduced, which incorporates the focus information as well as the ray information. A framework is developed to generate the SPC model and to extract high-level properties of the plenoptic capturing system. Results confirm that the SPC model is capable of describing the light sampling behaviour of the capturing system, and that the SPC framework is capable of extracting high-level camera properties at a higher descriptive level than the ray-based model. The results from the proposed model compete with those from the more elaborate wave optics model in the ranges where the wave nature of light is not dominant.
The outcome of the thesis can benefit the design, evaluation and comparison of complex capturing systems.</p> Tue, 11 Jun 2013 10:55:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-19138 Yun Li Coding of three-dimensional video content : Depth image coding by diffusion http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-19087 <p>Three-dimensional (3D) movies in theaters have become a massive commercial success during recent years, and it is likely that, with the advancement of display technologies and the production of 3D content, TV broadcasting in 3D will play an important role in home entertainment in the not-too-distant future. 3D video content contains at least two views from different perspectives for the viewer's left and right eyes. The amount of coded information is doubled if these views are encoded separately. Moreover, for multi-view displays (i.e. displays in which different perspectives of a scene in 3D are presented to the viewer at the same time through different angles), either video streams of all the required views must be transmitted to the receiver, or the displays must synthesize the missing views from a subset of the views. The latter approach has been widely proposed to reduce the amount of data being transmitted. The virtual views can be synthesized by the Depth Image Based Rendering (DIBR) approach from textures and associated depth images. However, it is still the case that the amount of information for the textures plus the depths presents a significant challenge for the network transmission capacity. Efficient compression will, therefore, increase the availability of content and provide better video quality under the same network capacity constraints.</p><p>In this thesis, the compression of depth images is addressed. These depth images can be assumed to be piecewise smooth.
Starting from the properties of depth images, a novel depth image model based on edges and sparse samples is presented, which may also be utilized for depth image post-processing. Based on this model, a depth image coding scheme that explicitly encodes the locations of depth edges is proposed, and the coding scheme has a scalable structure. Furthermore, a compression scheme for block-based 3D-HEVC is also devised, in which diffusion is used for intra prediction. In addition to the proposed schemes, the thesis illustrates several evaluation methodologies, especially the subjective stimulus-comparison test, which is suitable for evaluating the quality of two impaired images, as objective metrics are inaccurate with respect to synthesized views.</p><p>The MPEG test sequences were used for the evaluation. The results showed that virtual views synthesized from depth images post-processed using the proposed model are better than those synthesized from original depth images. More importantly, the proposed coding schemes using such a model produced better synthesized views than state-of-the-art schemes. As a result, the outcome of the thesis can lead to a better 3DTV quality of experience.</p> Tue, 11 Jun 2013 11:04:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-19087 Kun Wang Stereoscopic 3D Video Quality of Experience : impact of coding, transmission and display technologies http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-19044 <p>Three-dimensional (3D) videos are extending their success from cinema to home entertainment markets such as TV, DVD, Blu-ray, video games, etc. Video quality is a key factor that decides the success and acceptance of a new service. Degraded visual quality will have more severe consequences for 3D than for 2D videos, e.g.
eye-strain, headache and nausea. This thesis addresses the stereoscopic 3D video quality of experience as it can be influenced along the 3D video distribution chain, especially at the coding, transmission and display stages. The first part of the thesis concentrates upon 3D video coding and transmission quality over IP-based networks. 3D video coding and transmission quality has been studied from the end-users' point of view by introducing different 3D video coding techniques, transmission error scenarios and error concealment strategies. The second part of the thesis addresses display quality characterization. Two major types of consumer-grade 3D stereoscopic displays were investigated: displays based on active shutter glasses (SG) technology, and those based on passive polarization technology (film patterned retarder, FPR). The main outcomes can be summarized in three points: firstly, the thesis suggests that a spatial down-sampling process working together with high-quality video compression is an efficient means of encoding and transmitting stereoscopic 3D videos with an acceptable quality of experience. Secondly, this thesis has found that switching from 3D to 2D is currently the best method for concealing transmission errors in 3D videos. Thirdly, this thesis has compared three major visual ergonomic parameters of stereoscopic 3D display systems: crosstalk, spatial resolution and flicker visibility.
The outcomes of the thesis may benefit 3D video industries in improving their technologies so as to deliver a better 3D quality of experience to customers.</p> Tue, 4 Jun 2013 15:18:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-19044 Sara Rydberg Radiation induced losses in Ytterbium doped laser materials http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-18687 <p>In this work, two types of ytterbium (Yb) doped laser materials are studied, crystalline Yb:YAG and amorphous Yb/Al silica in both preform and fiber. Yb is suitable as a laser ion because of its simple energy level structure and low quantum defect. The ground and excited energy levels are separated by 10000 cm−1, but the existence of another transition in the ultraviolet (UV) region causes problems in a Yb-doped laser. The UV absorption band represents a charge transfer (CT) transition which involves the transfer of an electron from a nearby oxygen ion to the Yb ion and changes the Yb ion from trivalent to divalent, with the corresponding formation of a hole center. The color center formation causes a permanent optical loss in the material in the visible to near-infrared (NIR) spectral region, which absorbs the pump and laser wavelengths. The output power of the laser is reduced, and this is known as the photodarkening (PD) phenomenon. It is suggested that excited state absorption of the Yb3+ ion is involved in the transfer route of NIR photons to the UV range. The increase of Yb2+ upon UV irradiation is shown in both Yb:YAG and the Yb/Al silica preform.
The existence of Yb2+ luminescence from a photodarkened fiber is also shown, which proves that PD occurs through a CT process.</p> Thu, 4 Apr 2013 09:57:23 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-18687 David Sundström Optimized Pacing Strategies in Cross-Country Skiing and Time-Trial Road Cycling http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-18634 <p>This thesis is devoted to the analysis and optimization of pacing strategies in cross-country skiing and time-trial road cycling. In locomotive sports, it is well known that variable pacing strategies, using changes in the distribution of power output, are beneficial when external forces vary along the way. However, there is a lack of research that investigates in more detail the magnitude of power output alteration necessary to optimize performance. A numerical program has been developed in the MATLAB software to simulate cross-country skiing and time-trial road cycling, as well as pacing strategy optimization in these two locomotive sports. The simulations in this thesis are performed by solving equations of motion in which all the main forces acting on the athlete are considered. The motion equations also depend on the course profile, which is expressed as a connected chain of cubic splines. The simulation process is linked to an optimization routine called the Method of Moving Asymptotes (MMA), which strives to minimize the finishing time while altering the power output along the course. To mimic the human energetic system, the optimization is restricted by behavioural and side constraints. Simple constraints, such as maximum average power output, are used for cross-country skiing in Papers I and II. In Paper III a more sophisticated and realistic constraint is used for the power output in time-trial road cycling.
It is named the concept of critical power for intermittent exercise and combines the aerobic and anaerobic contributions to power output. In conclusion, this thesis has demonstrated the feasibility of using numerical simulation and optimization to optimize pacing strategies in two locomotive sports. The results clearly show that these optimized pacing strategies are more beneficial to performance than an even distribution of power output.</p> Mon, 25 Mar 2013 13:00:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-18634 Håkan Hägglund Local optical variations in paper : measurements and analysis http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-18130 Mon, 7 Jan 2013 08:48:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-18130 Jie He GASIFICATION-BASED BIOREFINERY FOR MECHANICAL PULP MILLS http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-17472 <p>The modern concept of "biorefinery" is predominantly based on chemical pulp mills, which create more value than cellulose pulp fibres alone by also recovering energy from the dissolved lignins and hemicelluloses. This concept is characterized by the conversion of biomass into various bio-based products. It includes thermochemical processes such as gasification and fast pyrolysis. In mechanical pulp mills, the feedstock available to a gasification-based biorefinery is significant, including logging residues, bark, fibre material rejects, biosludges and other available fuels such as peat, recycled wood, and paper products. This work studies the co-production of bio-automotive fuels, biopower, and steam via gasification in the context of the mechanical pulp industry.</p><p> </p><p>Biomass gasification with steam in a dual-fluidized bed gasifier (DFBG) was simulated with ASPEN Plus. From the model, the yield and composition of the syngas and the contents of tar and char can be calculated. The model has been evaluated against experimental results measured on a 150 kWth Mid Sweden University (MIUN) DFBG.
The model predicts that the content of char transferred from the gasifier to the combustor decreases from 22.5 wt.% of the dry and ash-free biomass at a gasification temperature of 750 ℃ to 11.5 wt.% at 950 ℃, but is insensitive to the mass ratio of steam to biomass (S/B). The H<sub>2</sub> concentration is higher than that of CO under normal DFBG operating conditions, but the two change places when the gasification temperature rises above about 950 ℃, or when the S/B ratio falls below about 0.15. The biomass moisture content is a key parameter if a DFBG is to be operated and maintained at a high gasification temperature. The model suggests that it is difficult to keep the gasification temperature above 850 ℃ when the biomass moisture content is higher than 15.0 wt.%. Thus, a certain amount of biomass needs to be added in the combustor to provide sufficient heat for biomass devolatilization and steam reforming. Tar content in the syngas can also be predicted from the model, which shows that tar decreases with gasification temperature and the S/B ratio. The tar content in the syngas also decreases significantly with gasification residence time, which is a key parameter.</p><p> </p><p>Mechanical pulping processes, such as the thermomechanical pulp (TMP), groundwood (SGW and PGW), and chemithermomechanical pulp (CTMP) processes, have very high wood-to-pulp yields. Producing pulp products by means of these processes is a prerequisite for the production of printing paper and paperboard products, due especially to important functional properties such as printability and stiffness. However, mechanical pulping processes consume a great amount of electricity, which may account for up to 40% of the total pulp production cost. In mechanical pulping mills, wood (biomass) residues are commonly utilized for electricity production through an associated combined heat and power (CHP) plant.
This techno-economic evaluation deals with the possibility of utilizing a biomass integrated gasification combined cycle (BIGCC) plant in place of the CHP plant. Integration of a BIGCC plant into a mechanical pulp production line might greatly improve the overall energy efficiency and cost-effectiveness, especially when the flow of biomass (such as branches and tree tops) from the forest is increased. When the fibre material that negatively affects pulp properties is utilized as a bioenergy resource, the overall efficiency of the system is further improved. A TMP+BIGCC mathematical model has been developed in ASPEN Plus. By means of this model, three cases are studied:</p><p> </p><p><strong>1)</strong> adding more forest biomass (logging residues) to the gasifier,</p><p><strong>2)</strong> adding a reject fraction of low-quality pulp fibers to the gasifier, and</p><p><strong>3)</strong> decreasing the TMP specific electricity consumption (SEC) by up to 50%.</p><p> </p><p>For the TMP+BIGCC mill, the energy supply and consumption are analyzed in comparison with a TMP+CHP mill. The production profit and the internal rate of return (IRR) are calculated. The results quantify the economic benefit of the TMP+BIGCC mill.</p><p> </p><p>Bio-ethanol has received considerable attention as a basic chemical and fuel additive. It is currently produced from sugar/starch materials, but can also be produced from lignocellulosic biomass via a hydrolysis-fermentation or thermo-chemical route. In terms of the thermo-chemical route, a few pilot plants ranging from 0.3 to 67 MW have been built and operated for alcohol synthesis. However, commercial success has not been achieved. In order to realize cost-competitive commercial ethanol production from lignocellulosic biomass through a thermo-chemical pathway, a techno-economic analysis needs to be done.</p><p> </p><p>In this work, a thermo-chemical process is designed, simulated, and optimized mainly with ASPEN Plus.
The techno-economic assessment is made in terms of ethanol yield, synthesis selectivity, carbon and CO conversion efficiencies, and ethanol production cost.</p><p> </p><p>Calculated results show that the major contributions to the production cost come from biomass feedstock and syngas cleaning. A biomass-to-ethanol plant should be built at a scale of around 200 MW. Cost-competitive ethanol production can be realized with efficient equipment, optimized operation, cost-effective syngas cleaning technology, inexpensive raw material with low pretreatment cost, high-performance catalysts, off-gas and methanol recycling, optimal system configuration and heat integration, and a high-value byproduct.</p> Fri, 30 Nov 2012 10:52:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-17472 Jonas Hermansson Shift work and cardiovascular disease http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-17466 <p>Shift work is a work schedule that is the opposite of normal daytime work, often defined as working time outside normal daytime hours (06:00 to 18:00). In recent years, shift work has been associated with an increased risk of numerous chronic conditions, including cardiovascular disease, some types of cancer, type II diabetes, and the metabolic syndrome. While some studies on the association between shift work and chronic disease have found supporting results, others have not. Therefore, more research is needed to clarify potential associations. The aim of this thesis was to further study the proposed association between shift work and cardiovascular disease. This was addressed in two studies: one analysed whether shift workers had an increased risk of ischemic stroke compared to day workers; the other analysed whether shift workers had an increased risk of short-term mortality (case fatality) after a myocardial infarction compared to day workers.
The studies were performed using logistic regression analysis in two different case-control databases. The findings from the first study indicated that shift workers did not have an increased risk of ischemic stroke. The findings from the second study showed that male shift workers had an increased risk of death within 28 days after a myocardial infarction; the results did not indicate an increased risk for female shift workers. Both studies were adjusted for behavioural as well as medical risk factors, which did not affect the results. The findings from this thesis provide new evidence that male shift workers have an increased risk of death within 28 days after a myocardial infarction; however, more research is needed to clarify and characterise any such potential associations.</p> Wed, 28 Nov 2012 09:07:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-17466 Lina Bellman Auktoriserade fastighetsvärderares syn på värdering : tankemönster om kommersiella fastigheter http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-17177 <p>The valuation of commercial property is a matter of collecting, analysing and assessing information. Property market values matter not only to society at large but also to those who make decisions based on valuation reports. The purpose of this licentiate thesis is a) to map how Swedish authorized property valuers view the factors that determine the value of commercial properties when valuations are made for the preparation of annual reports, and b) to compare and draw conclusions about the valuers' cognitive patterns in terms of content, complexity and homogeneity, and the extent to which these patterns differ between different groups of valuers.</p><p>To map the valuers' cognitive patterns, I use Kelly's (1955) grid technique together with complementary semi-structured interviews. I have interviewed nearly half (67) of Sweden's authorized property valuers. The results reveal three interpretable dimensions that can be considered central to the valuers' cognitive patterns. The first dimension concerns the focus of the valuation: valuers perceive that different kinds of information and judgement affect property valuation at the micro and macro levels respectively, where the micro level refers to properties in relation to their owners and the macro level to properties in relation to the market at large. The second dimension expresses that valuers perceive certain information as more or less verifiable depending on its character. The third dimension concerns the complexity of the judgement: valuers perceive different types of information as more or less complex to assess.</p><p>The results suggest that property valuers have a multidimensional cognitive pattern. When the authorized valuers are divided into groups based on different background variables, the three dimensions recur in the cognitive patterns of all groups, which also suggests that authorized property valuers have relatively homogeneous cognitive structures. Some differences in complexity and homogeneity do emerge, however, mainly related to the towns in which the valuers work and the universities at which they studied.</p> Mon, 12 Nov 2012 08:40:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-17177 Bishnu Chandra Poudel Forest biomass production potential and its implications for carbon balance http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-17281 <p>An integrated methodological approach is used to analyse the forest biomass production potential in the Middle Norrland region of Sweden, and its use to reduce carbon emissions. Forest biomass production, forest management, biomass harvest, and forest product use are analyzed from a system perspective considering the entire resource flow chains.
The system-wide carbon flows as well as avoided carbon emissions are quantified for the activities of forest biomass production, harvest, use and substitution of non-biomass materials and fossil fuels. Five different forest management scenarios and two biomass use alternatives are developed and used in the analysis. The analysis is divided into four main parts. In the first part, plant biomass production is estimated using principles of plant-physiological processes and soil-water dynamics. Biomass production is compared under different forest management scenarios, some of which include the expected effects of climate change based on the IPCC B2 scenario. In the second part, forest harvest potentials are estimated based on plant biomass production data and Swedish national forest inventory data for different forest management alternatives. In the third part, soil carbon stock changes are estimated for different litter input levels from standing biomass and forest residues left in the forest during the harvest operations. The fourth and final part is the estimation of carbon emissions reduction due to the substitution of fossil fuels and carbon-intensive materials by the use of forest biomass. Forest operational activities such as regeneration, pre-commercial thinning, commercial thinning, fertilisation, and harvesting are included in the analysis. The total carbon balance is calculated by summing up the carbon stock changes in the standing biomass, carbon stock changes in the forest soil, forest product carbon stock changes, and the substitution effects. Fossil carbon emissions from forest operational activities are calculated and deducted to obtain the net total carbon balance. The results show that the climate change effect most likely will increase forest biomass production over the next 100 years compared to a situation with unchanged climate. As an effect of increased biomass production, there is a possibility to increase the harvest of usable biomass.
The annual forest biomass production and harvest can be further increased by the application of more intensive forestry practices than those currently in use. Deciduous trees are likely to increase their biomass production because of climate change effects, whereas spruce biomass is likely to increase because of the implementation of intensive forestry practices. Intensive forestry practices such as the application of pre-commercial thinning, balanced fertilisation, and the introduction of fast-growing species to replace slow-growing pine stands can increase the standing biomass carbon stock. The soil carbon stock increase is higher when only stem-wood biomass is used, compared to whole-tree biomass use. The increase of carbon stocks in wood products depends largely on the magnitude of harvest and the use of the harvested biomass. The biomass substitution benefits are the largest contributor to the total carbon balance, particularly for the intensive forest management scenario in which whole-tree biomass is used and substitutes coal fuel and non-wood construction materials. The results show that the climate change effect could provide up to 104 Tg of carbon emissions reduction, and intensive forestry practices may further provide up to 132 Tg of carbon emissions reduction during the next 100 years in the area studied. This study shows that production forestry can be managed to balance biomass growth and harvest in the long run, so that the forest will maintain its capacity to increase standing biomass carbon and provide continuous harvests. Increasing standing biomass in Swedish managed forest may not be the most effective strategy to mitigate climate change.
Storing wood products in building materials delays carbon emissions to the atmosphere, and the wood material in the buildings can be used as biofuel at the end of the building life-cycle to substitute fossil fuels. These findings show that the forest biomass production potential in the studied area increases with climate change and with the application of intensive forestry practices. Intensive forestry practice has the potential for a continuous increase in biomass production which, if used to substitute fossil fuels and materials, could contribute significantly to net carbon emissions reductions and help mitigate climate change.</p> Tue, 30 Oct 2012 08:30:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-17281 Omeime Xerviar Esebamen Simulation, Measurement and Analysis of the Response of Electron- and Position Sensitive Detector http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-17239 <p>Different methods exist for probing and investigating the physical and structural composition of materials, especially detectors, whose usage has become an integral part of radiation detection. The use of scanning electron microscopy is just one such exploratory method. This technique uses a focused beam of high-energy electrons to generate a variety of signals at the surface of the device under investigation. This thesis presents the results derived from signals from electron beam-sample interactions, revealing information about the different cleanroom-fabricated electron detectors used. This information includes the detectors' external morphology and texture, surface recombination, fixed oxide charge and the behavioral characteristic in the form of position detection accuracy and linearity. An electron detector with a high ionization factor and a 10 nm silicon oxide passivating layer was fabricated. Results from scanning electron microscopy showed that its maximum responsivity was approximately 0.25 A/W out of a possible 0.27 A/W.
In conjunction with simulations, the results also showed the significance of the effect of the minority carrier's surface recombination velocity on the responsivity of the detectors. In addition, measurements were conducted to ascertain the performance variance of these electron detectors with respect to their surface recombination velocity and fixed oxide charge when the doping profile is altered. By incorporating special features on a fabricated duo-lateral position sensitive detector (PSD), the position sensing resolution of the PSD was also evaluated using the electron microscopic method. The evaluation showed a very high linearity over two dimensions for 77% of the PSD's active area. The results in this thesis offer a significant improvement in electron detectors for applications such as gas chromatography detection of trace amounts of chemical compounds in a sample, as well as applications involving position sensitive detection.</p> Thu, 25 Oct 2012 10:50:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-17239 Sebastian Schwarz Depth Map Upscaling for Three-Dimensional Television : The Edge-Weighted Optimization Concept http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-17048 <p>With the recent comeback of three-dimensional (3D) movies to the cinemas, there have been increasing efforts to spread the commercial success of 3D to new markets. The possibility of a 3D experience at home, such as three-dimensional television (3DTV), has generated a great deal of interest within the research and standardization community.</p><p>A central issue for 3DTV is the creation and representation of 3D content. Scene depth information plays a crucial role in all parts of the distribution chain, from content capture via transmission to the actual 3D display. This depth information is transmitted in the form of depth maps and is accompanied by corresponding video frames, i.e. for Depth Image Based Rendering (DIBR) view synthesis.
Nonetheless, scenarios do exist in which the original spatial resolutions of depth maps and video frames do not match, e.g. sensor-driven depth capture or asymmetric 3D video coding. This resolution discrepancy is a problem, since DIBR requires correspondence between the video frame and the depth map. A considerable amount of research has been conducted into ways of matching low-resolution depth maps to high-resolution video frames. Many proposed solutions utilize corresponding texture information in the upscaling process; however, they mostly fail to check this information for validity.</p><p>In striving for better 3DTV quality, this thesis presents the Edge-Weighted Optimization Concept (EWOC), a novel texture-guided depth upscaling method that addresses this lack of information validation. EWOC uses edge information from video frames as guidance in the depth upscaling process and, additionally, confirms this information against the original low-resolution depth. Over the course of four publications, EWOC is applied to 3D content creation and distribution. Various guidance sources, such as different color spaces or texture pre-processing, are investigated. An alternative depth compression scheme, based on depth map upscaling, is proposed, and extensions for increased visual quality and computational performance are presented in this thesis. EWOC was evaluated and compared with competing approaches, with the main focus consistently on the visual quality of rendered 3D views. The results show an increase in both objective and subjective visual quality relative to state-of-the-art depth map upscaling methods.
This quality gain motivates the choice of EWOC in applications affected by low-resolution depth.</p><p>In the end, EWOC can improve 3D content generation and distribution, enhancing the 3D experience and boosting the commercial success of 3DTV.</p> Mon, 22 Oct 2012 16:01:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-17048 Naeem Ahmad Modelling and optimization of sky surveillance visual sensor network http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-17123 <p>A Visual Sensor Network (VSN) is a distributed system of a large number of camera sensor nodes. The main components of a camera sensor node are an image sensor, an embedded processor, a wireless transceiver and an energy supply. The major difference between a VSN and an ordinary sensor network is that a VSN generates two-dimensional data in the form of images, which can be exploited in many useful applications. Potential application examples of VSNs include environment monitoring, surveillance, structural monitoring, traffic monitoring, and industrial automation. However, VSNs also raise new challenges. They generate large amounts of data which require higher processing power, larger bandwidth and more energy resources, while the VSN nodes are limited in these resources. This research focuses on the development of a VSN model to track large birds, such as the Golden Eagle, in the sky. The model explores a number of camera sensors along with optics, such as lenses of suitable focal length, to ensure a minimum required resolution of a bird flying at the highest altitude. The combination of a camera sensor and a lens forms a monitoring node.
The camera node model is used to optimize the placement of the nodes for full coverage of a given area above a required lower altitude. The model also presents a solution to minimize the cost (number of sensor nodes) of fully covering a given area between the two required extremes, higher and lower altitudes, in terms of camera sensor, lens focal length, camera node placement and the actual number of nodes for sky surveillance. The area covered by a VSN can be increased by increasing the higher monitoring altitude and/or decreasing the lower monitoring altitude. However, this also increases the cost of the VSN. The desirable objective is to increase the covered area but decrease the cost. This objective is achieved by using optimization techniques to design a heterogeneous VSN. The core idea is to divide a given monitoring range of altitudes into a number of sub-ranges of altitudes. The sub-ranges of monitoring altitudes are covered by individual sub-VSNs: VSN1 covers the lowest sub-range of altitudes, VSN2 covers the next higher sub-range, and so on, such that a minimum cost is used to monitor a given area. To verify the concepts developed to design the VSN model, and the optimization techniques to decrease the VSN cost, measurements were performed with actual cameras and optics. Laptop machines are used with the camera nodes as data storage and analysis platforms. The area coverage is measured at the desired lower altitude limits of homogeneous as well as heterogeneous VSNs and verified for 100% coverage.
Similarly, the minimum resolution is measured at the desired higher altitude limits of homogeneous as well as heterogeneous VSNs to ensure that the models are able to track the bird at these highest altitudes.</p> Tue, 2 Oct 2012 10:13:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-17123 Petter Stenmark Customer-focused product development : An outdoor industry perspective http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-16708 <p>Being customer‐focused is often considered to be a key success factor in product‐ or service development. This kind of approach may comprise many things in practice, such as formal or informal methods and activities that are carried out to identify and meet, or preferably exceed, customer needs and expectations. The overall purpose of this thesis is to contribute to a greater knowledge about the use and function of methods, activities and tools regarding customer‐focused product development in the outdoor industry. The thesis is based on three papers, all related to customer‐focused product development within the outdoor industry. Two empirical studies have been conducted. In the first one, the outdoor companies’ own experiences of customer involvement in product development are examined. In the second study, the use and function of environmental labels as drivers of attractive quality within the outdoor industry are explored. A new methodology for customer‐focused product development is also presented.
It is aimed to be used as hands‐on support for designing for the satisfaction of customer needs at different levels in practice, especially those that have been found to be important in the creation of attractive quality and customer loyalty.</p> Tue, 14 Aug 2012 08:54:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-16708 Abdul Waheed Malik Machine vision architecture on FPGA http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-16698 Fri, 10 Aug 2012 10:36:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-16698 Abdul Majid Analysis and implementation of switch mode power supplies in MHz frequency region http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-16691 Wed, 8 Aug 2012 08:40:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-16691 Gerth Öhman Idé och innovation http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-16513 <p>The present thesis aims to deepen the current knowledge about the conception and development of ideas by innovators, and about the potential that small businesses in rural areas have to adopt innovation and new technology. The thesis is limited to studying the process itself and the progression from the conception and development of ideas, via invention and innovation, to market and customer. The research questions include: What motivates an inventor to innovate? How is innovation practised in small companies? What is the innovators’ approach to problem solving? What opportunities exist for small businesses in rural areas to adopt innovations and new technologies? The thesis also discusses, on the one hand, innovators’ approaches to motivation and attitude and, on the other hand, how ideas are selected and then developed in the process and progression from the originator, via the invention, to market and customer innovation.
The present thesis applies both quantitative and qualitative approaches, based on behavioural research. Important conclusions from this thesis are: that the innovation process should be delimited and recorded starting from the (basic) idea and its originator; that the innovator must remain in the development of ideas with complementary and alternative ideas; and that, in the innovation process, the progression from idea to innovation should be based on a ʺbottom upʺ perspective, where the patent-law concept of “prior user rights” supports the innovator’s involvement in the process and progression.</p> Wed, 20 Jun 2012 10:08:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-16513 Thomas Öhlund Coated Surfaces for Inkjet-Printed Conductors http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-16449 <p>In this thesis, a number of commercially available paper substrates of various types are characterized, and their characteristics are related to the performance of inkjet-printed conductors using silver nanoparticle ink. The evaluated performance variables are electrical conductivity as well as the minimum achievable conductor width and edge raggedness. It is shown that quick absorption of the ink carrier is beneficial for achieving well-defined conductor geometry and high conductivity. Surface roughness with topography variations of sufficiently large amplitude and frequency is detrimental to print definition and conductivity. Porosity is another important factor, where the characteristic pore size is much more important than the total pore volume. A nearly ideal porous coating has a large total pore volume but a small characteristic pore size, preferably smaller than the individual nanoparticles in the ink. Apparent surface energy is important for non-absorbing substrates but of limited importance for coatings with a high absorption rate. Additionally, a concept for improving the geometric definition of inkjet-printed conductors on nonporous films has been demonstrated.
By coating the films with polymer-based coatings to provide a means of ink solvent removal, the minimum conductor width was reduced by a factor of 2 or more. Intimately connected to the end performance of printed conductors is a well-adapted sintering methodology. A comparative evaluation of a number of selective sintering methods has been performed on paper substrates with different heat tolerance. Pulsed high-power white light was found to be a good compromise between conductivity performance, reliability and production adaptability. The purpose of the work conducted in this thesis is to increase the knowledge base regarding how the surface characteristics of papers and flexible films affect the performance of printed nanoparticle structures. This would improve the selection, adaptation or manufacturing of such substrates to suit printed high-conductivity patterns, such as printed antennas for packaging.</p> Thu, 14 Jun 2012 16:28:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-16449 Xiaozhou Meng Maintenance Consideration for Long Life Cycle Embedded System http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-16440 <p>The work presented in this thesis relates to the maintenance of long life cycle embedded systems. Various issues can present problems for maintaining a long life cycle embedded system, such as component obsolescence and IP (intellectual property) portability. For products in automotive, avionics, military applications etc., the desired life cycles of these systems are many times longer than the obsolescence cycle of the electronic components used in the systems. Maintainability is analyzed in relation to long life cycle embedded systems for different design technologies. FPGA platform solutions are proposed in order to ease system maintenance. Different platform cases are evaluated by analyzing the essence of each case and the consequences of different risk scenarios during system maintenance.
This has shown that an FPGA platform with vendor- and device-independent soft IP has the highest maintainability. A mathematical model of obsolescence management for long life cycle embedded system maintenance is presented. This model can estimate the minimum management cost for different system architectures, and it consists of two parts. The first part generates a graph in Matlab in the form of a state transfer diagram. A segment table is then output from Matlab for further optimization. The second part finds the lowest cost in the state transfer diagram, which can be viewed as a transshipment problem. Linear programming is used to calculate the minimized management cost and schedule, which is solved by Lingo. A simple Controller Area Network (CAN) controller system case study is shown in order to apply this model. The model is validated by a set of synthetic and experimentally selected values. The results provided are a minimized management cost and an optimized management time schedule. Test experiments on the response of the maintenance cost to the interest rate and unit cost were implemented. The responses from the experiments meet our expectations. The reuse of predefined IP can shorten development times and assist the designer in meeting time-to-market (TTM) requirements. System migration between devices is unavoidable, especially when a long life cycle is expected, so IP portability becomes an important issue for system maintenance. An M-JPEG decoder case study is presented in the thesis. The lack of any clear separation between computation and communication is shown to limit the IP’s portability with respect to different communication interfaces. A methodology is proposed to ease interface modification and interface reuse, and thus to increase the portability of an IP.
Technology- and tool-dependent firmware IP components are also shown to limit IP portability with respect to development tools and FPGA vendors.</p> Thu, 14 Jun 2012 11:31:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-16440 Xin Cheng Hardware centric machine vision for high precision measurement of reference structures in optical navigation http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-16176 Fri, 4 May 2012 10:50:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-16176 Mikael Erdegren Understanding surface defects on direct chill cast 6xxx aluminium billets http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-16175 Fri, 4 May 2012 10:32:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-16175 Dariusz Zasadowski REMOVAL OF LIPOPHILIC EXTRACTIVES AND MANGANESE IONS FROM SPRUCE TMP WATER BY FLOTATION http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-16088 <p>Dissolved and colloidal substances (DisCo) and metals are released from wood during thermomechanical pulp (TMP) production. The mechanical treatment causes these components to accumulate in process waters as the water circulation systems in integrated paper mills are being closed. Disturbances such as pitch deposition on the paper machine (pitch problems), specks in the paper, decreased wet and dry strength, interference with cationic process chemicals, and impaired sheet brightness and friction properties appear in the presence of DisCo substances. The presence of transition metal ions such as manganese results in higher consumption of bleaching chemicals (hydrogen peroxide) and lowers the optical quality of the final product, and the addition of complexing agents, such as EDTA or DTPA, is needed to prevent this.
The ongoing drive to decrease water consumption and increase process efficiency in pulp and paper production makes it very important both to know the effects of wood substances on pulping and papermaking and to be able to remove them efficiently.</p><p>The investigations presented in this thesis show that lipophilic extractives can be removed from TMP press water to a high extent. A 90% decrease in turbidity and a 91% removal of lipophilic extractives from TMP press water can be obtained by the addition of a cationic surfactant as a foaming agent during flotation. Additionally, fibres in TMP press water are not removed with the foam fraction but are purified. A retained concentration of hydrophilic extractives in the process water indicates that the flotation is selective. Moreover, by introducing a new recoverable surface-active complexing agent, a chelating surfactant, manganese ions in the form of chelates can be successfully removed from the pulp fibres and separated from the process water in the same flotation process.</p><p>The findings presented above indicate new possibilities for internal water cleaning and decreased emissions to water if flotation technology is applied in an integrated mechanical pulp mill.</p> Tue, 17 Apr 2012 14:32:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-16088 Martin Olsen The mechanics in two nanosized systems : Size effect and threshold field http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-16007 <p>This thesis investigates the mechanics of two nanosized systems. Paper I investigates a size effect in a cantilever nanowire affecting its resonance frequency. Paper II reveals a threshold field for the formation of a mound by the diffusion of surface atoms on a substrate under an STM tip.</p><p>Paper I: Using a one-dimensional jellium model and standard beam theory, we calculate the spring constant of a vibrating nanowire cantilever.
By using the asymptotic energy eigenvalues of the standing electron waves over the nanometer-sized cross-section area, the change in the grand canonical potential is calculated, and hence the force and the spring constant. As the wire bends, more electron states fit in its cross section. This has an impact on the spring ”constant”, which oscillates slightly with the bending of the wire. In this way we obtain an amplitude-dependent resonance frequency of the oscillations that should be detectable.</p><p>Paper II: By applying a voltage pulse to a scanning tunneling microscope tip, the surface under the tip will be modified. In this paper we take a closer look at the model of electric-field-induced surface diffusion of adatoms, including the van der Waals force as a contribution to the formation of a mound on a surface. The dipole moment of an adatom is the sum of the surface-induced dipole moment (which is constant) and the dipole moment due to electric field polarisation, which depends on the strength and polarity of the electric field. The electric field is analytically modelled by a point charge over an infinite conducting flat surface. Based on this, we calculate the force that causes adatoms to migrate. The calculated force is small considering the voltage used, typically 1 pN, but due to thermal vibration adatoms hop about the surface, and even a small net force can be significant in the drift of adatoms. In this way we obtain a novel formula for a polarity-dependent threshold voltage for mound formation on the surface for a positive tip. Knowing the voltage of the pulse, we are then able to calculate the radius of the formed mound. A threshold electric field for mound formation of about 2 V/nm is calculated.
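The point-charge-over-a-conductor model mentioned above can be illustrated with the textbook image-charge result (a hedged sketch of the standard electrostatics, not the thesis's own derivation; q, d, p_0 and α are generic symbols for the tip charge, its height, the surface-induced dipole moment and the adatom polarizability):

```latex
% Field at the surface, a radial distance \rho from the point below the charge
E_z(\rho) = \frac{1}{4\pi\varepsilon_0}\,\frac{2qd}{\left(d^2+\rho^2\right)^{3/2}},
\qquad
p(\rho) = p_0 + \alpha\,E_z(\rho).

% Radial force on the adatom, from U = -p_0 E_z - \tfrac{1}{2}\alpha E_z^2:
F_\rho = -\frac{\partial U}{\partial \rho}
       = \bigl(p_0 + \alpha E_z\bigr)\,\frac{\partial E_z}{\partial \rho}.
```

Because the induced part of the dipole changes sign with the field, a force of this form is polarity dependent, which is consistent with the polarity-dependent threshold voltage described in the abstract.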
In addition, we found that van der Waals force is of importance for shorter distances and its contribution to the radial force on the adatoms has to be considered for distances smaller than 1.5 nm for commonly used voltages.</p> Tue, 8 May 2012 09:36:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-16007 Tatiana Chekalina A value co-creation perspective on the customer-based brand equity model for tourism destinations http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-15991 Thu, 8 Mar 2012 14:30:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-15991 Mohammad Anzar Alam Online Surface Topography Characterization Technique for Paper and Paperboard using Line of Light Triangulation http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-15967 Wed, 29 Feb 2012 16:28:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-15967 Abtin Daghighi The Maximum Principle for Cauchy-Riemann Functions and Hypocomplexity http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-17701 <p>This licentiate thesis contains results on the maximum principle for Cauchy–Riemann functions (CR functions) on weakly 1-concave CR manifolds and hypocomplexity of locally integrable structures. The maximum principle does not hold true in general for smooth CR functions, and basic counterexamples can be constructed in the presence of strictly pseudoconvex points. We prove a maximum principle for continuous CR functions on smooth weakly 1-concave CR submanifolds. Because weak 1-concavity is also necessary for the maximum principle, a consequence is that a smooth generic CR submanifold of C^n obeys the maximum principle for continuous CR functions if and only if it is weakly 1-concave. The proof is then generalized to embedded weakly p-concave CR submanifolds of p-complete complex manifolds. The second part concerns hypocomplexity and hypoanalytic structures.
We give a generalization of a known result regarding the automatic smoothness of solutions to the homogeneous problem for the tangential CR vector fields, given local holomorphic extension. This generalization ensures that a given locally integrable structure is hypocomplex at the origin if and only if it does not allow solutions near the origin which cannot be represented by a smooth function near the origin.</p> Fri, 14 Dec 2012 09:48:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-17701 Jawad Saleem Power electronics for resistance spot welding equipment http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-16699 <p>Resistance spot welding is an inexpensive and efficient way of joining metals. It has extensive applications in household appliances and in the automotive industry. The traditional approach in relation to spot welding machines is to use 50 Hz welding transformers. The drawback associated with these transformers is that they are both heavy and bulky. Moreover, the fusing requirements become larger due to increased welding power. With the development of high-power semiconductor switches and DC-DC converter topologies, it is now possible to develop inverter-driven resistance spot welding equipment (RSE) which can be operated at frequencies higher than 50 Hz. The advantage of using high frequencies is the reduction in the size of the transformer. Moreover, the fusing requirements are relaxed, as the power is shared between three phases. In many industrial applications long welding arms are required between the transformer and the weld spot, which increases the inductance. The parasitic inductance in the welding arms limits the maximum rate of change of the current. In order to achieve a higher power the current has to be rectified. Rectifying a current of the order of tens of kA is a challenging task and is one of the major sources of loss. The full bridge converter topology is used for the inverter-driven RSE. The power switches used in the converter are IGBTs.
In RSE, DC-link capacitors are used to store high energy. In the case of circuit failure, the stored energy can cause the IGBT device to rupture; in order to avoid this, a protection scheme is discussed in this work. A controller circuit, using a DSPIC33FJ16GS502 controller, is developed in order to drive a high-frequency full bridge converter, which can also be used to drive the IGBTs in the RSE. The secondary-side welding current is of the order of kiloamperes. A requirement for the welding control is that the current must be sensed precisely, and in order to fulfill this, a Hall sensor system has been developed. This developed circuit is used in the feedback control of the RSE. The presence of metallic objects and tools in the vicinity of the Hall sensor system can affect its precision. We have estimated the exclusion distance for metal objects from the sensor by means of a model developed in COMSOL Multiphysics software.</p> Fri, 10 Aug 2012 10:48:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-16699 Svensson Sven Utmanande utveckling : om bemanningskonsulters möte med svårförenliga krav http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-15729 Wed, 18 Jan 2012 11:05:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-15729 Maria Kallberg Professional challenges in recordkeeping in Sweden http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-15727 Wed, 18 Jan 2012 10:46:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-15727 Stefan Andersson Low consistency refining of mechanical pulp : process conditions and energy efficiency http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-15406 <p>This thesis is focused on low consistency (LC) refining of mechanical pulp.
The research included evaluations of energy efficiency, development of pulp properties, the influence of fibre concentration on LC refining, and the effects of rotor position in a two-zoned LC refiner.</p><p>Trials were made at mill scale in a modern TMP line equipped with an MSD Impressafiner for chip pre-treatment, double disc (DD) first-stage refining and a prototype 72-inch TwinFlo LC refiner in the second stage. Tensile index increased by 8 Nm/g and fibre length was reduced by 10% in LC refining at 140 kWh/adt gross specific refining energy and a specific edge load of 1.0 J/m. The specific light scattering coefficient did not develop significantly over the LC refiner.</p><p>The above-mentioned TMP line was compared with a two-stage single disc high consistency Twin 60 refiner line. The purpose was to evaluate specific energy consumption and pulp properties. The two different process solutions were tested at mill scale, running a similar Norway spruce wood supply. At the same tensile index and freeness, the specific energy consumption was 400 kWh/adt lower in the DD-LC concept compared with the SD-SD system. Pulp characteristics of the two refining concepts were compared at a tensile index of 47 Nm/g. Fibre length was lower after DD-LC refining than after SD-SD refining. The specific light scattering coefficient was higher and the shive content much lower for DD-LC pulp.</p><p>The effects of sulphite chip pre-treatment on second-stage LC refining were also evaluated. No apparent differences in fibre properties after LC refining were noticed between treated and untreated pulps. Sulphite chip pre-treatment in combination with LC refining in the second stage yielded a pulp, without screening and reject refining, with a tensile index and shive content similar to non-pre-treated final pulp after screening and reject refining.</p><p>A pilot-scale study was performed to investigate the influence of fibre concentration on pulp properties in LC refining of mechanical pulps.
Market CTMP was utilised in all trials, and fibre concentrations were controlled by means of adjustments of the pulp consistency and by screen fractionation of the pulp. In addition, various refiner parameters were studied, such as no-load, gap and bar edge length. Pulp with the highest fibre concentration supported a larger refiner gap than pulp with low fibre concentration at a given gross power input. Fibre shortening was lower and the tensile index increase was higher for long-fibre-enriched pulp. The results from this study support the interesting concept of combining main-line LC refining and screening, where the screen reject is recycled to the LC refiner inlet.</p><p>It has been observed that the rotor in two-zoned refiners is not always centred, even though the pulp flow rate is equal in both refining zones. This leads to unequal plate gaps, which renders unevenly refined pulp. Trials were performed at mill scale, using the 72-inch TwinFlo, to investigate differences in pulp properties and rotor positions by means of altering the pressure difference between the refining zones. In order to produce homogeneous pulp, it was found that uneven plate gaps can be compensated for in LC refiners with dual refining zones. Results from the different flow rate adjustments indicated that the control setting with similar plate gaps gave the most homogeneous pulp.</p> Mon, 19 Dec 2011 14:23:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-15406 Patrik Jonsson Intelligent networked sensors for increased traffic safety http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-14982 <p>Our society needs to continuously perform transports of people and goods to ensure that business keeps going. Every disturbance in the transportation of people or goods affects commerce and may result in economic losses for companies and society. Severe traffic accidents cause personal tragedies for the people involved as well as huge costs for society.
Therefore the road authorities continuously try to improve traffic safety. Traffic safety may be improved by reduced speeds, crash-safe cars, tires with better road grip and improved road maintenance. The environmental effects of road maintenance when spreading de-icing chemicals need to be considered, i.e. how much of the chemicals should be used to maximize traffic safety and minimize the environmental effects. Knowledge about the current and upcoming road condition can improve road maintenance and hence improve traffic safety. This thesis deals with sensors and models that give information about the road condition. The performance and reliability of existing surface-mounted sensors were examined by laboratory experiments. Further research involved field studies to collect data used to develop surface status models based on road weather data and camera images. Field studies have also been performed to find the best usage of non-intrusive IR technology. The research presented here showed that no single sensor gives enough information by itself to safely describe the road condition. However, the results indicated that among the traditional road-surface-mounted sensors only the active freezing point sensor gave reliable freezing point results. Further research aimed to find a model that could classify the road condition into different road classes from existing road weather sensor data and road images. The result was a model that could accurately distinguish between the road conditions dry, wet, snowy and icy.
These road conditions are clearly dissimilar and are therefore used as the definition of the road classes used in this thesis. Finally, results from research regarding remote-sensing IR technology showed that it significantly improves knowledge of the road temperature and status compared to data from surface-mounted sensors.</p> Wed, 30 Nov 2011 09:11:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-14982 Proscovia Svärd The Interface Between Enterprise Content Management and Records Management in Changing Organizations http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-14702 <p>The increased demand from citizens for efficient service delivery from public sector organizations has implications for the information that underpins those services. Robust and effective information management is required. Information is looked upon as a resource that can give organizations a competitive edge if it is well leveraged. To address the need for more services and for more efficient service delivery, the Swedish government has promoted e-government initiatives, and the two municipalities that are the subjects of this research have responded by engaging in e-service development and provision. e-Government has at its core the use of information and communication technology (ICT). The municipalities have embarked on the analysis and automation of their business processes and hence the use of information systems.</p><p>Web-based technologies have created a two-way communication flow which has generated complex information for the municipalities to address. This development calls for stronger information and records management regimes. Enterprise Content Management is a new information management construct proposed to help organizations deal with all their information resources. It promotes enterprise-wide information management. There is, however, little knowledge and understanding of ECM in the Swedish public sector.
Further, how e-government developments have affected the management of information is an issue that has not been explored. Traditionally, Swedish public authorities have employed records management to address the challenges of managing information. Records management has been used for the effective and systematic capture of records and the maintenance of their reliability and authenticity. While information helps with the daily running of business activities, records carry the evidentiary value of the interactions between the citizens and the municipalities. This research critically examines the interface between Enterprise Content Management (ECM) and records management as information/records management approaches. This has meant examining what the similarities and differences between the two approaches are. The research instrumentally used the lens of the Records Continuum Model (RCM), which promotes the management of the entire records continuum, takes a proactive approach, combines the management of archives and records, and supports the pluralisation of the captured records. The research further highlights the information management challenges that the municipalities are facing as they engage in e-government developments.</p><p><strong>Keywords:</strong> Enterprise Content Management, Records Management, E-government, Long-term Preservation, Business Process Management, Enterprise Architecture.</p> Mon, 28 Nov 2011 09:47:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-14702 Felix Dobslaw Automatic Instance-based Tailoring of Parameter Settings for Metaheuristics http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-14613 <p>Many industrial problems in various fields, such as logistics, process management, or product design, can be formalized and expressed as optimization problems in order to make them solvable by optimization algorithms.
However, solvers that guarantee finding optimal solutions (complete solvers) can in practice be unacceptably slow. This is one of the reasons why approximative (incomplete) algorithms, producing near-optimal solutions under restrictions (most dominantly time), are of vital importance.</p><p>These approximative algorithms go under the umbrella term metaheuristics, each of which is more or less suitable for particular optimization problems. These algorithms are flexible solvers that only require a representation for solutions and an evaluation function when searching the solution space for optimality. What all metaheuristics have in common is that their search is guided by certain control parameters. These parameters have to be set manually by the user and are generally problem-dependent and interdependent: a setting producing near-optimal results for one problem is likely to perform worse for another. Automating the parameter-setting process in a sophisticated, computationally cheap, and statistically reliable way is challenging, and it attracts a significant amount of attention in the artificial intelligence and operational research communities. This activity has not yet produced any major breakthroughs concerning the utilization of problem instance knowledge or the employment of dynamic algorithm configuration.</p><p>The thesis promotes automated parameter optimization with reference to the inverse impact of problem instance diversity on the quality of parameter settings with respect to instance-algorithm pairs. It further emphasizes the similarities between static and dynamic algorithm configuration and related problems in order to show how they relate to each other. It then proposes two frameworks for instance-based algorithm configuration and evaluates the experimental results. The first is a recommender system for static configurations, combining experimental design and machine learning.
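The instance-based idea can be caricatured in a few lines. This is a minimal sketch with invented instance features, parameter names and archive entries, not Dobslaw's actual framework: extract cheap features from a problem instance, then recommend the parameter setting that worked best on the nearest previously seen instance.

```python
import math

# Hypothetical training archive: (instance feature vector, best-known setting).
# The features (e.g. size, density) and parameters are invented for illustration.
ARCHIVE = [
    ((10.0, 0.2), {"mutation_rate": 0.05, "population": 50}),
    ((200.0, 0.8), {"mutation_rate": 0.20, "population": 300}),
    ((120.0, 0.5), {"mutation_rate": 0.10, "population": 150}),
]

def recommend(features):
    """Nearest-neighbour recommendation of a parameter setting."""
    _, setting = min(ARCHIVE, key=lambda entry: math.dist(entry[0], features))
    return setting

# A new instance close to the second archived one gets that instance's setting.
print(recommend((180.0, 0.7)))
```

The design choice mirrored here is the thesis's premise: the more diverse the instances, the less a single global setting can fit them all, so conditioning the recommendation on instance features should pay off.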
The second framework can be used for static or dynamic configuration, taking advantage of the iterative nature of population-based algorithms, which are a very important sub-class of metaheuristics.</p><p>A straightforward implementation of framework one did not result in the expected improvements, supposedly because of pre-stabilization issues. The second approach shows competitive results when compared to a state-of-the-art model-free configurator, reducing the training time by more than two orders of magnitude.</p> Mon, 17 Oct 2011 15:03:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-14613 Jamie Walters Ripples Across The Internet of Things : Context Metrics as Vehicles for Relational Self-Organization http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-14426 <p>The current paradigm shift in computing has placed mobile computation at the centre of focus. Users are now even more connected, demanding everything-everywhere services. These services, such as social networking and media, benefit from the availability of context information seamlessly gathered and shared, providing customized and user-centric experiences. The distribution of context information no longer conforms to the paradigms of the existing Internet with regard to heterogeneity, connectivity and availability. This mandates new approaches towards its organization and provisioning in support of dependent applications and services.</p><p>In response to these developments, the work summarized in this thesis addresses the fundamental problem of presenting context information in organized models as relevant subsets of global information. In approaching this problem, I introduced a distributed collection of context objects that can be arranged into simple relevant subsets called context schemata and presented to applications and services, supporting the realization of context-based user experiences.
Acknowledging the dynamic behaviour inherent in real-world interactions, I introduced an algorithm for measuring the proximities and similarities among these context objects, providing a metric through which to achieve organization. Additionally, I provided a means of ranking heterogeneous and distributed sensors in response to real-time interaction between users and their digital ecosystem. Ranking provides an additional metric with which to achieve organization or to identify important and reputable information sources. The work I present here additionally details my approach to realizing this complete behaviour on a distributed overlay, exploiting its properties for distribution, persistence and messaging. The overlay is also utilized for the provisioning of the supporting context information.</p><p>Improvements in the ability to discover and attach new context information sources are fundamental to the ability to continually maintain expressions of context derived from heterogeneous and disparate sources. By being able to create relevant subsets of organized data related to the requirements of applications and services in an end-point, infrastructures are realized for connecting and supporting the increasingly large numbers of users and their sources of information. Coupled with the distribution, these infrastructures realize improvements with regard to the effort required to achieve the same results. 
The culmination of the work presented in this thesis is an effort to enable seamless context-centric solutions on a future Internet of Things, thus constituting an adequate solution to the challenges raised above.</p> Thu, 1 Sep 2011 15:10:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-14426 Marie Cronskär The use of additive manufacturing in the custom design of orthopedic implants http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-14390 Wed, 24 Aug 2011 13:24:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-14390 Muhammad Imran Investigation of Architectures for Wireless Visual Sensor Nodes http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-14388 <p>The wireless visual sensor network is an emerging field which has proved useful in many applications, including industrial control and monitoring, surveillance, environmental monitoring, personal care and the virtual world. Traditional imaging systems have used a wired link, a centralized network, high processing capabilities, unlimited storage and a mains power source. In many applications, the wired solution results in high installation and maintenance costs, whereas a wireless solution offers lower maintenance and infrastructure costs and greater scalability. The technological developments in image sensors, wireless communication and processing platforms have paved the way for smart camera networks, usually referred to as Wireless Visual Sensor Networks (WVSNs). WVSNs consist of a number of Visual Sensor Nodes (VSNs) deployed over a large geographical area. The smart cameras can perform complex vision tasks using limited resources such as batteries or alternative energy sources, embedded platforms, a wireless link and a small memory. Current research in WVSNs is focused on reducing the energy consumption of the node so as to maximise the life of the VSN. 
To meet this challenge, different software and hardware solutions for the implementation of VSNs are presented in the literature. The focus in this thesis is on the exploration of energy-efficient reconfigurable architectures for VSNs by partitioning vision tasks over software, hardware platforms and locality. For any application, some of the vision tasks can be performed on the sensor node, after which data is sent over the wireless link to the server, where the remaining vision tasks are performed. Similarly, at the VSN, vision tasks can be partitioned between software and hardware platforms. In the thesis, all possible strategies are explored by partitioning vision tasks between the sensor node and the server. The energy consumption of the sensor node is evaluated for different strategies on a software platform. It is observed that performing some of the vision tasks on the sensor node and sending compressed images to the server, where the remaining vision tasks are performed, results in lower energy consumption. In order to achieve better performance and low power consumption, Field Programmable Gate Arrays (FPGAs) are introduced for the implementation of the sensor node. The strategies with reasonable design times and costs are implemented on a hardware-software platform. Based on the implementation of the VSN on the FPGA together with a micro-controller, the lifetime of the VSN is predicted using the measured energy values of the platforms for different processing strategies. The implementation results support our analysis that a VSN with such characteristics will have a longer lifetime.</p> Wed, 24 Aug 2011 10:41:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-14388 Anna-Karin Westman Samtal om begreppskartor : en väg till ökad förståelse http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-14387 <p>This thesis highlights how student discussions about concept maps can contribute to an increased understanding, among the participating students, of the meaning of the selected biological concepts. 
As background to the thesis, research results are presented which show a number of difficulties that students may have within the subject area of cell biology. Two examples are the many subject-specific concepts and the fact that cell biology involves several levels of organization. An important cause of these difficulties is the nature of science, where concrete phenomena are explained with abstract models and theories. The background also includes previous research which has shown that students' discussions with one another have a positive effect on learning. The aim of the studies is to investigate what type of conversation arises in a student group during the construction of a concept map, and what the participants thought of the task. Results from two sub-studies are reported and discussed in the thesis. The first of these studies is reported in Article I. Data were collected while the students carried out a task in which, during discussions in small groups, they constructed concept maps on the topic of cellular respiration. The students were in their final year of the natural science programme. Additional data were collected in individual interviews with students after the task. The analysis focused on the extent to which statements in the discussions were biologically correct, what type of conversation the groups conducted, and how the participants experienced the task. The results show that many statements are fully or partly correct. The conversations contained parts where the students merely agreed with what someone else had said, but also parts where the participants argued for their own viewpoints. In the subsequent interviews, the students expressed a positive experience of the task. The second study is reported in Article II. It was conducted in a class studying genetics. Here too, student groups discussed how a concept map should be constructed. Data were collected during the discussions and in subsequent individual interviews. 
The analysis focused on previously known difficulties in genetics, the scientific content of the conversations, the character of the conversations, and how the students experienced the task. The results show that the students' discussions develop towards the scientific viewpoint through conversations in which most of the students participate actively and also express their own opinions. In the interviews, the students speak positively about the task, and several consider that they have gained a better understanding of the subject. In summary, the results of the thesis show that conversations about concept maps can contribute to an improved understanding of cell biology.</p> Wed, 24 Aug 2011 08:32:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-14387 Radhika Ambatipudi Multilayered Coreless Printed Circuit Board (PCB) Step-down Transformers for High Frequency Switch Mode Power Supplies (SMPS) http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-13967 <p>The Power Supply Unit (PSU) plays a vital role in almost all electronic equipment. The continuous efforts applied to the improvement of semiconductor devices such as MOSFETs, diodes, controllers and MOSFET drivers have led to increased switching speeds of power supplies. By increasing the switching frequency of the converter, the size of passive elements such as inductors, transformers and capacitors can be reduced. Hence, the high-frequency transformer has become the backbone of isolated AC/DC and DC/DC converters. The main features of transformers are to provide isolation for safety purposes, to provide multiple outputs as in telecom applications, to build step-down/step-up converters, and so on. Core-based transformers, when operated at higher frequencies, have limitations such as core losses, which are proportional to the operating frequency. Even though core materials are available for the low-MHz frequency region, commercially available transformers have been limited to between a few hundred kHz and 1 MHz because of the copper losses in their windings. 
Skin and proximity effects caused by induced eddy currents are further major drawbacks when operating these transformers at higher frequencies. It is therefore necessary to mitigate these core losses and skin and proximity effects when operating transformers at very high frequencies. This can be achieved by eliminating the magnetic cores of the transformers and by introducing a proper winding structure.</p><p>A new multi-layered coreless printed circuit board (PCB) step-down transformer for power transfer applications has been designed; with the assistance of a resonant technique, it maintains the advantages offered by existing core-based transformers, such as high voltage gain, high coupling coefficient, sufficient input impedance and high energy efficiency. In addition, different winding structures have been studied and analysed for higher step-down ratios in order to reduce copper losses in the windings and to achieve a higher coupling coefficient. The advantage of increasing the number of layers for a given power transfer application, in terms of coupling coefficient, resistance and energy efficiency, has been reported. The maximum energy efficiency of the designed three-layered transformers was found to be within the range of 90%-97% for power transfer applications operated in the low-MHz frequency region. The designed multi-layered coreless PCB transformers for given power applications of 8, 15 and 30W show that a volume reduction of approximately 40-90% is possible when compared to existing core-based counterparts. 
The estimation of EMI emissions from the designed transformers shows that the amount of radiated EMI from a three-layered transformer is less than that of a two-layered transformer, because of the decreased radius for the same amount of inductance.</p><p>Multi-layered coreless PCB gate-drive transformers were designed for signal transfer applications and have successfully driven double-ended topologies such as the half-bridge, the two-switch flyback converter and resonant converters, with a low gate-drive power consumption of about half a watt. The performance characteristics of these transformers have also been evaluated using a high-frequency NiZn magnetic material and operated in the 2-4 MHz frequency region.</p><p>These multi-layered coreless PCB power and signal transformers, together with the latest semiconductor switching devices such as SiC and GaN MOSFETs and the SiC Schottky diode, are an excellent choice for the next generation of compact SMPS.</p> Mon, 13 Jun 2011 15:10:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-13967 Stefan Forsström Enabling Adaptive Context Views for Mobile Applications : Negotiating Global and Dynamic Sensor Information http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-13919 <p>Mobile devices with Internet access and large numbers of sensors push the development of intelligent services towards new forms of pervasive applications. These applications are made context-aware by utilizing information from sensors, and hence the context of a situation, in order to provide a better service. Based on this, the focus of this thesis is on the challenge of creating context awareness in mobile applications. 
Such context awareness both utilizes dynamic context information from globally available sensors and provides adaptive views of relevant context information to applications.</p><p>The first challenge is to identify the properties of an architecture that provides scalable access to information from global sensors within bounded time, because existing systems do not support these properties in a satisfactory manner. The majority of related systems employ a centralized approach with limited support for global sensor information due to poor scalability. Therefore, this thesis proposes a distributed architecture capable of exchanging context between users and entities on a peer-to-peer overlay. Pervasive applications can thus utilize global sensor information in a scalable and manageable way within predictable time bounds.</p><p>The second challenge is to support continually changing and evolving context information, while providing it as both adaptive and manageable views to applications. To address this particular problem, this thesis proposes the usage of a locally stored evolving context object called a context schema. In detail, this schema contains all context information that is considered relevant for a specific user or entity. Furthermore, this thesis proposes an application interface that can provide snapshots of the evolving context schemas as adaptive views. These views can then be used in context-aware mobile applications without inducing unnecessary delays.</p><p>By successfully addressing these challenges, this thesis enables the creation of pervasive and adaptive applications that utilize evolving context in mobile environments. These capabilities are made possible by enabling access to global sensor information based on a distributed context exchange overlay, in combination with evolving context schemas offered as views through an application interface. 
In support of these claims, numerous proof-of-concept applications and prototypes have been developed to verify the approach. Hence, this thesis concludes that the proposed approach with evolving context information has the ability to scale in a satisfactory manner and also to dynamically offer relevant views to applications in a manageable way.</p> Fri, 3 Jun 2011 15:36:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-13919 Anna Lundberg Ink-paper interactions and effect on print quality in inkjet printing http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-13373 <p>This thesis concerns paper and ink interactions related to inkjet printing. The main purpose of this work was to gain a deeper understanding of which parameters control the flow of ink into papers and how the ink interacts with the paper surface. The overall objective was to find key parameters for optimizing the print quality in inkjet printing. Characterization of paper surfaces in terms of porosity, surface roughness and surface energy was made. Objective and subjective measurements were used for print quality evaluation. Light microscopy imaging and SEM were used to see how ink interacts with the paper surface in a printed image. A high-speed camera was used to study the absorption of picolitre-sized inkjet droplets into fine papers. An initial study on the effect of paper properties on print quality was made. Results indicated that there were small differences in print quality for pilot papers with different compositions (in a specific parameter window), and that the commercial paper COLORLOK® reproduced a noticeably high colour gamut compared to the other samples. Research was conducted to see how surface fixation can affect the print quality of printouts made with pigmented ink. 
Surface fixation promotes retention of the pigmented colorant in the outermost surface layer of the paper and has been denoted “colorant fixation” in this thesis.</p><p>It was shown that applying colorant fixation onto a paper surface before printing can increase the detail reproduction in a printed image. Different concentrations of calcium chloride were applied onto the paper surface of full-scale-produced non-commercial papers. Test printing was made with a SoHo (Small office/Home office) printer using pigmented ink, and results showed that using calcium chloride as a surface treatment can lead to aggregation of pigments at the surface, resulting in a higher detail reproduction.</p><p>Fast absorption of the carrier liquid into the paper and fast fixation of colourants on the surface are important in inkjet printing to avoid colour-to-colour bleeding. These demands will be more pronounced when the printing speed increases. It is important to understand which parameters affect the absorption process in order to be able to control the mechanisms and to optimize the print quality. A study of the absorption of picolitre-sized inkjet droplets into fine paper was made in this work. Theoretical equations describing fluid absorption into capillaries were tested and compared with experimental results. 
The results showed that the time dependence in the Lucas-Washburn (L-W) equation fits the data fairly well, whereas the L-W equation overestimates the penetration depth.</p><p>The results are directly applicable to the paper and printing industry and can be used as a base for future studies of the absorption of picolitre-sized droplets into porous materials and for studies of the aggregation of colloidal particles on surfaces.</p> Fri, 18 Mar 2011 09:01:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-13373 Kannan Thiagarajan Tight-binding calculations of electron scattering rates in semiconducting zigzag carbon nanotubes http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-13162 <p>The technological interest in a material depends very much on its electrical, magnetic, optical and/or mechanical properties. In carbon nanotubes the atoms form a cylindrical structure with a diameter of the order of 1 nm, but the nanotubes can be up to several hundred micrometers in length. This makes carbon nanotubes a remarkable model for one-dimensional systems. A lot of effort has been dedicated to manufacturing carbon nanotubes, which are expected to be the material for the next generation of devices. Despite all the attention that carbon nanotubes have received from the scientific community, only rather limited progress has been made in the theoretical understanding of their physical properties. In this work, we attempt to provide an understanding of the electron-phonon and electron-defect interactions in semiconducting zigzag carbon nanotubes using a tight-binding approach. The electronic energy dispersion relations are calculated by applying the zone-folding technique to the dispersion relations of graphene. A fourth-nearest-neighbour force constant model has been applied to study the vibrational modes in the carbon nanotubes. 
Both the electron-phonon interaction and the electron-defect interaction are formulated within the tight-binding approximation and analyzed in terms of their quantum mechanical scattering rates. Apart from the total scattering rates, their components in terms of phonon absorption, phonon emission, backscattering and forward scattering have been determined and analyzed. The scattering rates for (5,0), (7,0), (10,0), (13,0) and (25,0) carbon nanotubes at room temperature and at 10 K are presented and discussed. The phonon scattering rate depends on the lattice temperature in the energy interval 0-0.17 eV. We find that backscattering and phonon emission are dominant over forward scattering and phonon absorption in most of the energy interval. However, forward scattering and phonon absorption can be comparable to backscattering and phonon emission in limited energy intervals. The phonon modes associated with each peak in the electron-phonon scattering rates have been identified, and the similarities in the phonon scattering rates between different nanotubes are discussed. The dependence of the defect scattering rate on the tube diameter is similar to that of the phonon scattering rate. Both the phonon and the defect scattering rates show a strong dependence on the tube diameter (i.e., the scattering rate decreases as a function of the index of the nanotube). It is observed that backscattering and forward scattering for electrons interacting with defects occur with the same frequency at all energies, in sharp contrast to the situation for phonon scattering. 
It is demonstrated that the differences in the scattering rates between different tubes are mainly due to the differences in their band structures.</p> Fri, 28 Jan 2011 08:52:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-13162 Magnus Neuman Angle Resolved Light Scattering in Turbid Media : Analysis and Applications http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-13154 <p>Light scattering in turbid media is essential for such diverse application areas as paper and print, computer rendering, optical tomography, astrophysics and remote sensing. This thesis investigates angular variations of light reflected from plane-parallel turbid media using both mathematical models and reflectance measurements, and deals with several applications. The model in most widespread use in industry is the Kubelka-Munk model, which neglects angular variations in the reflected light. This thesis employs a numerical solution of the angle resolved radiative transfer problem to better understand how the angular variations are related to medium properties. It is found that light is reflected anisotropically from all media encountered in practice, and that the angular variations depend on the medium absorption and transmittance and on the angular distribution of the incident light. If near-surface bulk scattering dominates, as in strongly absorbing, highly transmitting or obliquely illuminated media, relatively more light is reflected at large polar (grazing) angles. These results are confirmed by measurements using a set of paper samples. The only situation with isotropic reflectance is when a non-transmitting, non-absorbing medium is illuminated diffusely. This is the only situation in which the Kubelka-Munk model is exactly valid. 
The results also show that there is no such thing as an ideal bulk scattering diffusor, and these findings can affect calibration and measurement procedures defined in international standards. The implications of the presented results are studied for a set of applications including reflectance measurements, angle resolved color and point spreading. It is seen that differences in instrument detection and illumination geometry can result in measurement differences. The differences are small, and if other sources of error, such as fluorescence and gloss, are not eliminated, the differences related to instrument geometry become difficult to discern. Furthermore, the angle resolved color of a set of paper samples is assessed both theoretically and experimentally. The chroma decreases and the lightness increases as the observation polar angle increases. The observed differences are clearly large, and it is an open issue how angle resolved color should be handled. Finally, the dependence of point spreading in turbid media on the medium parameters is studied. The asymmetry factor is varied while keeping the optical response in a standardized measurement geometry constant. It is seen that the point spreading increases as forward scattering becomes more dominant, and that the effect is larger if the medium is low-absorbing with a large mean free path. A generic model of point spreading must therefore capture the dependence on all of these medium parameters. This thesis shows that turbid media reflect light anisotropically, and angle resolved radiative transfer models are therefore necessary to capture this. Using simplified models can introduce errors in an uncontrolled manner. 
The results presented potentially have consequences for all applications dealing with light scattering, some of which are studied here.</p> Tue, 25 Jan 2011 15:15:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-13154 Sebastian Bader Enabling autonomous environmental measurement systems with low-power wireless sensor networks http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-12982 <p>Wireless Sensor Networks appear as a technology which provides the basis for a broad field of applications, drawing interest in various areas. On the one hand, they appear to allow the next step in computer networks, building large collections of simple objects that exchange information with respect to their environment or their own state. On the other hand, their ability to sense and communicate without a fixed physical infrastructure makes them an attractive technology for measurement systems. Although the interest in Wireless Sensor Network research is increasing, and new concepts and applications are being demonstrated, several fundamental issues remain unsolved. While many of these issues do not need to be solved for proof-of-concept designs, they are important to address when considering the long-term operation of these systems. One of these issues is the system’s lifetime, which relates to the lifetime of the nodes of which the system is composed. This thesis focuses on node lifetime extension based on energy management. While some constraints and results might hold true from a more general perspective, the main application target is environmental measurement systems based on Wireless Sensor Networks. 
Lifetime extension possibilities that result from application characteristics are presented, based on (i) reducing energy consumption and (ii) utilizing energy harvesting. For energy consumption, we show how precise task scheduling enabled by node synchronization, combined with methods such as duty cycling and power domains, can optimize the overall energy use. With reference to the energy supply, the focus lies on solar-based solutions, with special attention placed on their feasibility at locations with limited solar radiation. Furthermore, the dimensioning of these systems is addressed. It is shown that, for the presented application scenarios, near-perpetual node lifetime can be obtained. This is achieved by focusing on efficient resource usage and by means of a carefully designed energy supply.</p> Fri, 14 Jan 2011 13:58:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-12982 Hari Babu Kotte High Speed (MHz) Switch Mode Power Supplies (SMPS) using Coreless PCB Transformer Technology http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-13964 <p>The most essential unit required by all electronic devices is the Power Supply Unit (PSU). The main objectives of power supply designers are to reduce the size, cost and weight, and to increase the power density of the converter. There is also a requirement for lower losses in the circuit and hence an improvement in the energy efficiency of the converter circuit. Operating the converter circuits at higher switching frequencies reduces the size of passive components such as transformers, inductors and capacitors, which results in a compact size, reduced weight and increased power density of the converter. At present, the switching frequency of the converter circuit is limited by the increased switching losses in existing semiconductor devices and, on the magnetic side, by increased hysteresis and eddy current losses in the core-based transformer. 
Based on continuous efforts to improve new semiconductor materials such as GaN/SiC, and with recently developed high-frequency multi-layered coreless PCB step-down power transformers, it is now feasible to design ultra-low-profile, high-power-density isolated DC/DC and AC/DC power converters. This thesis focuses on the design, analysis and evaluation of converters operating in the MHz frequency region with the latest semiconductor devices and multi-layered coreless PCB step-down power and signal transformers.</p><p>An isolated flyback DC-DC converter operated in the MHz frequency region with a multi-layered coreless PCB step-down 2:1 power transformer has been designed and evaluated. Soft switching techniques have been incorporated in order to reduce the switching loss of the circuit. The flyback converter has been successfully tested up to a power level of 10W, in the switching frequency range of 2.7-4 MHz. The energy efficiency of the quasi-resonant flyback converter was found to be in the range of 72-84% under zero-voltage switching (ZVS) conditions. The output voltage of the converter was regulated by implementing the constant off-time frequency modulation technique.</p><p>Because of the theoretical limitations of Si MOSFETs, new materials such as GaN and SiC are being introduced into the market, and these are showing promising results in the converter circuits described in this thesis. Comparative parameters of the semiconductor materials, such as the energy band gap, field strengths and figure of merit, have been discussed. In this case, an existing Si MOSFET has been compared with a GaN MOSFET using a multi-layered coreless PCB step-down power transformer for the given input/output specifications of the flyback converter circuit. 
It has been determined that the energy efficiency of the 45V-to-15V regulated converter using the GaN MOSFET was improved by 8-10% compared to the converter using the Si MOSFET, due to the lower gate-drive power consumption, lower conduction losses and improved rise/fall times of the switch.</p><p>For some AC/DC and DC/DC applications, such as laptop adapters, set-top boxes and telecom applications, the high-voltage power MOSFETs used in converter circuits possess higher gate charges than low-voltage MOSFETs. In addition, when they are operated at higher switching frequencies, the gate-drive power consumption, which is a function of frequency, increases. The switching speeds are also reduced due to the increased capacitance. In order to minimize this gate-drive power consumption and to increase the frequency of the converter, a cascode flyback converter was built using a multi-layered coreless PCB transformer and then evaluated. Both simulation and experimental results have shown that, with the cascode arrangement, the switching speeds of the converter were increased and the energy efficiency was significantly improved compared to that of the single-switch flyback converter.</p><p>In order to further maximize the utilization of the transformer, to reduce the voltage stress on the MOSFETs and to obtain the maximum power density from the power converter, double-ended topologies were chosen. For this purpose, gate-drive circuitry utilising the multi-layered coreless PCB gate-drive transformer was designed and evaluated in both a half-bridge and a series resonant converter. It was found that the gate-drive power consumption using this transformer was less than 0.8W for the frequency range of 1.5-3.5 MHz. 
In addition, by using this gate-drive circuitry, the maximum energy efficiency of the series resonant converter was found to be 86.5% at an output power of 36.5W.</p> Mon, 13 Jun 2011 15:09:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-13964 Victor Kardeby Automatic sensor clustering : connectivity for the internet of things http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-13953 <p>Current predictions from industry envision that within a decade, the Internet will be populated by tens of billions of devices. Already today, smart Internet devices have sensors that provide an enormous potential for creating new applications. The challenge at hand is how this information can be shared on the future Internet in order to unlock the full capability of applications to interact with the real world. Therefore, there is an urgent need for scalable and agile support for connecting people, places and artifacts in applications via a vast number of devices and sensors on the future Internet. Clearly, this poses a challenge of sharing, and thus storage, of so-called context information. Beyond scalable context storage lies another challenge: to identify and locate devices that are important to the user. In a system supporting billions of continuously changing sensors and actuators, a search engine would not work; an intelligent way to group devices is therefore required. This thesis deals with mainly three issues: firstly, proposing a method for devices to be reachable, and thus addressable, independent of their location in the infrastructure; secondly, how the proposed method can be used to ensure automatic connectivity anywhere between clients and the services offered by a device, in particular associated sensors and actuators; and thirdly, how the grouping and support can be combined and used to dynamically associate sensors from across the Internet with applications, assuming that the aforementioned grouping exists. 
The proposed solution to the first issue is to store identifier-locator pairs in an overlay. For the second issue we propose a sensor socket which exploits the identifier-locator pairs to enable device mobility. The third issue is addressed by providing a group-cast operation in the sensor socket. This arrangement allows communication with peers determined by a grouping algorithm which operates on context information in the context overlay. Thus we have enabled the creation of automated dynamic clustering of sensors and actuators in the Internet of Things. The sensor socket is designed as a stand-alone module to support any context overlay that provides the same basic functionality. The sensor socket embodies support to automatically interconnect and communicate with devices. Using bridging software, remote devices can be dynamically found and inserted into a legacy local area network where current devices can benefit from the connectivity. In future work the bridge can be extended to actively locate and identify nearby sensors that are otherwise unable to participate in the overlay network.</p> Fri, 10 Jun 2011 08:40:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-13953 Khursheed Khursheed Investigation of intelligence partitioning in wireless visual sensor networks http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-14445 <p>The wireless visual sensor network is an emerging field in which many visual sensor nodes are deployed, each containing an image sensor, an on-board processor, memory and a wireless transceiver. In comparison to traditional wireless sensor networks, which operate on one-dimensional data, wireless visual sensor networks operate on two-dimensional data, which requires higher processing power and communication bandwidth.
Research focus within the field of wireless visual sensor networks has been on two different extremes, involving either sending raw data to the central base station without local processing or conducting all processing locally at the visual sensor node and transmitting only the final results. This research work focuses on determining an optimal point of hardware/software partitioning at the visual sensor node, as well as partitioning tasks between local and central processing, based on the minimum energy consumption for the vision processing tasks. Different possibilities for partitioning the vision processing tasks between hardware, software and locality for the implementation of the visual sensor node used in wireless visual sensor networks have been explored. The effect of packet relaying and node density on the energy consumption and implementation of the individual wireless visual sensor node, when used in multi-hop wireless visual sensor networks, has also been explored. The lifetime of the visual sensor node is predicted by evaluating the energy requirement of the embedded platform, with a combination of a Field Programmable Gate Array (FPGA) and a micro-controller for the implementation of the visual sensor node, and, in addition, taking into account the amount of energy required for receiving/forwarding the packets of other nodes in the multi-hop network. Advancements in FPGAs have been the motivation behind their choice as the vision processing platform for implementing the visual sensor node. This choice is based on the reduced time-to-market, low Non-Recurring Engineering (NRE) cost and programmability compared to ASICs.
The other part of the architecture of the visual sensor node is the SENTIO32 platform, which is used for vision processing in the software implementation of the visual sensor node and for communicating the results to the central base station in the hardware implementation (using the RF transceiver embedded in SENTIO32).</p> Tue, 6 Sep 2011 09:05:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-14445 Anette Karlsson High consistency hydrogen peroxide bleaching of Norway spruce mechanical pulps http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-12984 Fri, 14 Jan 2011 14:30:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-12984 Kerstin Andersson Lignin in wastewater generated by mechanical pulping : Chemical characterisation and removal by adsorption http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-12983 Fri, 14 Jan 2011 14:21:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-12983 Fredrik Linnarsson Wireless sensor networks in loader crane applications http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-12981 Fri, 14 Jan 2011 13:51:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-12981 Xin Huang Sensor application privacy and security http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-12978 Fri, 14 Jan 2011 13:34:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-12978 Maria Bogren EN UTOPISK IDÉ? : Medverkan på (o)lika villkor i utvecklingspartnerskap http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-12768 <p><strong>A utopian idea?
</strong></p><p><strong>Participation Under (Dis)Similar Conditions in Development Partnerships</strong></p><p><strong>Maria Bogren</strong></p><p> </p><p>Department of Social Sciences</p><p>Mid Sweden University, SE-831 25 Östersund, Sweden</p><p>ISSN 1652-8948, Mid Sweden University Licentiate Thesis 53;</p><p>ISBN 978-91-86694-12-8</p><p> </p><p> </p><h1>Abstract</h1><p>Important societal issues nowadays do not get resolved through the care of the state alone; rather, their solutions involve multiple actors. Such cooperation can be organized in partnerships where actors from, for example, the public sector, private companies and non-profit organizations attempt to find solutions to a current societal issue. The target group affected by the problems can also be involved in the partnership. The aim of this study is to contribute to an increased understanding of cooperation in partnerships, and especially of the target group’s participation in partnerships. An idea regarding the target group’s participation is followed from the European level, via the national level, to the local level in a development partnership. I follow the local development partnership for two years with a view to examining the translation process of the idea regarding the target group’s participation. Data was collected through interviews, relevant documents and observation. What is more, the significance of the institutional surroundings for what happens in the partnership is discussed. The idea regarding the target group’s participation manifests itself in: a) how the target group should be represented; b) how it gains influence and c) how the role of the target group’s representatives should be shaped. The study shows that ideas change and adjust over time, and also that the target group participates under different conditions compared to the rest of the representatives in the partnership.
A way to strengthen the target group’s participation in the partnership can be through further organizing, thus increasing the legitimacy of the target group.</p><p> </p><p>Keywords: translation, target group, participation, public-private partnerships</p><p><strong>SUMMARY</strong></p><p> </p><p>Pressing societal problems are nowadays not always solved by the state alone; instead, several actors become involved. Such cooperation can be organized in partnerships where actors from, for example, the public sector, companies and non-profit organizations together try to find solutions to a current societal problem. The target group affected by the problems can also be involved in the partnership. The aim of this study is to contribute to an increased understanding of cooperation in partnerships, especially regarding the target group’s participation in partnerships. An idea about the target group’s participation is followed from the European level, to the national level and finally to the local level in a development partnership. I follow the local development partnership over a period of two years and use document studies, interviews and observations to study the translation process of the idea about the target group’s participation. In addition, the significance of the institutional surroundings for what happens in the partnership is discussed. The idea about the target group’s participation is expressed in ideas about how the target group should be represented, how it should gain influence and how the role of target-group representative should be shaped. The study shows that the ideas change and are adapted over time, and that the target group participates under different conditions compared to the other representatives in the partnership.
One way to strengthen the target group’s participation in the partnership can be through further organizing, thereby giving the target group increased legitimacy.</p> Wed, 15 Dec 2010 15:25:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-12768 Lisa Öberg Treeline dynamics in short and long term perspectives : observational and historical evidence from the southern Swedish Scandes http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-12670 <p>Against the background of past, recent and future climate change, the present thesis addresses elevational shifts of alpine treelines in the Swedish Scandes. By definition, <em>treeline </em>refers to the elevation (m a.s.l.) at a specific site of the upper trees of a specific tree species, at least 2 m tall.</p><p>Based on historical records, the first part of the thesis reports and analyzes the magnitude of treeline displacements for the main tree species (<em>Betula pubescens</em> ssp. <em>czerepanovii</em>, <em>Picea abies</em> and <em>Pinus sylvestris</em>) since the early 20th century. The study covered a large and heterogeneous region and more than 100 sites. Concurrent with a temperature rise of c. 1.4 °C over the past century, maximum treeline advances of all species amount to about 200 m. That is virtually what should be predicted from the recorded temperature change over the same period of time. Thus, it appears that under ideal conditions, treelines respond in close equilibrium with air temperature evolution. However, over most parts of the landscape, conditions are not that ideal and treeline upshifts have therefore been much smaller. The main reason for that discrepancy was found to be topoclimatic constraints, i.e.
the combined action of geomorphology, wind, snow distribution, soil depth, etc., which over large parts of the alpine landscape preclude treelines from reaching their potential thermal limit.</p><p>The recorded treeline advance of at most 200 m or so over the past century emerges as a truly anomalous event in late Holocene vegetation history.</p><p>The second part of the thesis is focused more on long-term changes of treelines and on one specific and prevalent mechanism of treeline change. The first part of the thesis revealed that for <em>Picea</em> and <em>Betula</em>, treeline shift was accomplished largely by phenotypic transformation of old-established stunted and prostrate individuals (krummholz) growing high above the treeline. In obvious response to climate warming over the past century, such individuals have transformed into erect tree form, whereby the treeline (as defined here) has risen. As a means for deeper understanding of this mode of positional treeline change, extant clonal spruces growing around the treeline were radiocarbon dated from megafossil remains preserved in the soil underneath their canopies. It turned out that <em>Picea abies</em> in particular may attain almost eternal life due to its capability for vegetative reproduction and phenotypic plasticity. Some living clones were in fact inferred to have existed already 9500 years ago, and have thus persisted at the same spot throughout almost the entire Holocene. This contrasts with other tree species, which have left no living relicts from the early Holocene, when they actually grew as high as the spruce. Thereafter they retreated by more than 300 m in elevation, supporting the view that also on that temporal scale, treelines are highly responsive to climate change.</p><p>The early appearance of <em>Picea </em>in the Scandes suggests that <em>Picea</em> “hibernated” through the last glacial phase much closer to Scandinavia than earlier thought.
It also immigrated to northern Sweden much earlier than the established wisdom holds.</p><p>The experiences gained in this thesis should constitute essential components of any model striving to project the landscape-ecological consequences of possible future climate shifts.</p> Wed, 15 Dec 2010 22:19:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-12670 Sheila Zimic OPENING THE BOX : Exploring the presumptions about the 'Net Generation' http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-12189 <p>There are many names or labels which refer to the generation growing up with digital media, including ‘Net Generation’ (Tapscott, 1998), ‘digital natives’ (Prensky, 2001), ‘cyberkids’ (Holloway, 2003) and ‘MySpace generation’ (Rosen, 2008). The core idea behind these labels is that young people who have grown up surrounded by digital technology are very different from previous generations in their way of using, and even thinking about, the new digital technology. This appears to be reinforcing a generational divide and makes the assumption that young people can be categorized into one group in relation to their use of ICTs. The approach in this thesis is to empirically explore, in order to nuance, some of these presumptions about the ‘Net Generation’ (defined according to Tapscott). Thus, the research question is: How can the presumptions about the ‘Net Generation’ be nuanced?</p><p>The following three presumptions have been explored within the three papers included in the thesis: i) The ‘Net Generation’ diverges from previous generations in relation to the use of the internet; ii) The ‘Net Generation’ is techno-savvy or digitally competent; iii) The digitally competent ‘Net Geners’ are also digital participants, since there is a causal relationship between digital competence and digital participation. The explorations are conducted by using the theoretical concepts ‘digital skills’, ‘self-efficacy’ and ‘participatory culture’.
Several hypotheses, deduced from previous research, have been tested on a nationally representative sample of people born between the years 1978 and 1997 (categorised as the ‘Net Generation’). The results show that the internet usage of the ‘Net Geners’ is diversified; hence, it is an oversimplification to talk about them as a homogeneous group. Those included in the categorisation have different opportunities to participate in the digital society. Their internet usage differs both in terms of how much time they spend and what they do online. Their digital skills and self-efficacy in the use of computers are also different, and so is the perceived feeling of participation in the information society. This implies that the ‘Net Geners’ do not have equal conditions in relation to participation in the digital society. However, what is meant by participation is still an unresolved question which requires further exploration.</p> Wed, 3 Nov 2010 09:39:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-12189 Ludovic Gustafsson Coppel Whiteness and Fluorescence in Paper : Perception and Optical Modelling http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-12143 <p>This thesis is about modelling and predicting the perceived whiteness of plain paper from the paper composition, including fluorescent whitening agents. This includes psycho-physical modelling of perceived whiteness from measurable light reflectance properties, and physical modelling of light scattering and fluorescence from the paper composition.</p><p>Existing models are first tested and improvements are suggested and evaluated. The standardised and widely used CIE whiteness equation is first tested on commercial office papers with visual evaluations by different panels of observers, and improved models are validated. Simultaneous contrast effects, known to affect the appearance of coloured surfaces depending on the surrounding colour, are shown to significantly affect the perceived whiteness.
A colour appearance model including simultaneous contrast effects (CIECAM02-m2), earlier tested on coloured surfaces, is successfully applied to perceived whiteness. A recently proposed extension of the Kubelka-Munk light scattering model, including fluorescence for turbid media of finite thickness, is successfully tested for the first time on real papers.</p><p>It is shown that the linear CIE whiteness equation fails to predict the perceived whiteness of highly white papers with a distinct bluish tint. This equation is applicable only in a defined region of the colour space, a condition that is shown not to be fulfilled by many commercial office papers, although they appear white to most observers. The proposed non-linear whiteness equations give these papers a whiteness value that correlates with their perceived whiteness, while application of the CIE whiteness equation outside its region of validity overestimates perceived whiteness.</p><p>It is shown that the quantum efficiency of two different fluorescent whitening agents (FWA) in plain paper is rather constant with respect to FWA type, FWA concentration, filler content and fibre type. Hence, the fluorescence efficiency is essentially dependent only on the ability of the FWA to absorb light in its absorption band. Increased FWA concentration accordingly leads to increased whiteness. However, since FWA absorbs light in the violet-blue region of the electromagnetic spectrum, the reflectance factor decreases in that region with increasing FWA amount. This violet-blue absorption tends to give a greener shade to the paper and explains most of the observed greening and whiteness saturation at larger FWA concentrations.
A redward shift of the quantum efficiency is observed with increasing FWA concentration, but this is shown to have a negligible effect on the whiteness value.</p><p>The results are directly applicable in industrial settings for better instrumental measurement of whiteness, and thereby for optimising the use of FWA with the goal of improving perceived whiteness. In addition, a modular Monte Carlo simulation tool, Open PaperOpt, is developed to allow future spatial- and angle-resolved particle-level light scattering simulation.</p><p> </p> Tue, 19 Oct 2010 10:04:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-12143 Jens Persson Homogenization of Some Selected Elliptic and Parabolic Problems Employing Suitable Generalized Modes of Two-Scale Convergence http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-11991 <p>The present thesis is devoted to the homogenization of certain elliptic and parabolic partial differential equations by means of appropriate generalizations of the notion of two-scale convergence. Since homogenization is defined in terms of H-convergence, we seek the H-limits of sequences of periodic monotone parabolic operators with two spatial scales and an arbitrary number of temporal scales, and the H-limits of sequences of two-dimensional, possibly non-periodic, linear elliptic operators. To this end we utilize the theories of evolution-multiscale convergence and λ-scale convergence, respectively, which are generalizations of the classical two-scale convergence mode, custom-made to treat homogenization problems of the prescribed kinds. Concerning the multiscaled parabolic problems, we find that the result of the homogenization depends on the behavior of the temporal scale functions. The temporal scale functions considered in the thesis may, in the sense explained in the text, be slow or rapid and in resonance or not in resonance with respect to the spatial scale function.
The homogenization for the possibly non-periodic elliptic problems gives the same result as for the corresponding periodic problems, except that the local gradient operator is everywhere replaced by a differential operator consisting of the product of the local gradient operator and a matrix describing the geometry, which depends, effectively, parametrically on the global variable.</p> Mon, 20 Sep 2010 15:33:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-11991 Mats Ainegren The rolling resistances of roller skis and their effects on human performance during treadmill roller skiing http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-10844 <p>Modern ski-treadmills allow cross-country skiers, biathletes and ski-orienteers to test their physical performance in a laboratory environment using classical and freestyle techniques on roller skis. For elite athletes the differences in performance between test occasions are quite small, thus emphasising the importance of knowing the roller skis’ rolling resistance coefficient, µ<sub>R</sub>, in order to allow correct comparisons between the results, as well as providing the opportunity to study work economy between different athletes, test occasions and core techniques.</p><p>Thus, one of the aims of this thesis was to evaluate how the roller skis’ µ<sub>R</sub> is related to warm-up, mass, velocity and inclination of the treadmill.
It was also necessary to investigate the methodological variability of the rolling resistance measurement system, RRMS, specially produced for the experiments, with a reproducibility study in order to indicate the validity and reliability of the results.</p><p>The aim was also to study physiological responses to different µ<sub>R</sub> during roller skiing with freestyle and classical roller skis and techniques on the treadmill, as a case in which all measurements were carried out in stationary and comparable conditions.</p><p>Finally, the aim was also to investigate the work economy of amateurs and of female and male junior and senior cross-country skiers during treadmill roller skiing, i.e. as a function of skill, age and gender, including whether differences in body mass cause significant differences in external power per kg due to differences in the roller skis’ µ<sub>R</sub>.</p><p>The experiments showed that during a warm-up period of 30 minutes, µ<sub>R</sub> decreased to about 60-65% and 70-75% of its initial value for freestyle and classical roller skis respectively. For another 30 minutes of rolling no significant change was found. Simultaneous measurements of roller ski temperature and µ<sub>R</sub> showed that a stabilized µ<sub>R</sub> corresponds to a certain running temperature for a given normal force on the roller ski. The study of the influence of normal force, velocity and inclination on µ<sub>R</sub> showed a significant influence of normal force, while different velocities and inclinations of the treadmill only resulted in small changes in µ<sub>R</sub>.
The reproducibility study of the RRMS showed no significant differences between paired measurements with either the classical or the freestyle roller skis.</p><p>The study of the effects of a ~50% change in µ<sub>R</sub> on physiological variables showed that during submaximal steady state exercise, external power, oxygen uptake, heart rate and blood lactate were significantly changed, while there were non-significant or only small changes to cycle rate, cycle length and ratings of perceived exertion. Incremental maximal tests showed that time to exhaustion was significantly changed, and this occurred without significantly changed maximal power, maximal oxygen uptake, maximal heart rate or blood lactate, and the influence on ratings of perceived exertion was non-significant or small.</p><p>The final part of the thesis, which focused on work economy, found no significant difference between the four groups of elite competitors, i.e. between the two genders and between the junior and senior elite athletes. Only the male amateurs significantly differed among the five studied groups. The study also showed that the external power per kg was significantly different between the two genders due to differences in body mass and µ<sub>R</sub>, i.e. the lighter female testing groups were roller skiing with a relatively higher rolling resistance coefficient compared to the heavier testing groups of male participants.
It is claimed that commercial experiences are different from traditional industry and mass-production, and even distinct from goods and services. The possibility of creating something extraordinary in order to gain profit is of increasing interest in today’s business world. Consumers are seeking experiences to reach a higher level of personal growth, experiences that create personal identity and lead to long-lasting memories. This is something an increasing number of consumers are willing to pay money for - the commercial experience market.</p><p>The purpose of this thesis is to contribute knowledge about and a deeper understanding of commercial experiences, both in general and especially with regard to how customer value is created. The focus of the research was also to strengthen and support organizations that offer commercial experiences. In order to fulfill the purpose, two case studies were conducted with different focal points. The first aimed to find best practice and explore excellent ways of working when providing commercial experiences. The second study aimed to identify the needs for improvement to strengthen organizations offering commercial experiences.</p><p>According to my findings, there seem to be several distinctions between commercial experiences and goods and services. These include: the level of price, the time spent by the customer, the strongly emotional customer affect and, maybe most importantly, the finding that commercial experiences create a higher level of customer value than goods and services. All this indicates that the commercial experience is to be considered an offering in its own right, a refined customer offer of higher value. Since commercial experiences are said to engage customers in an inherently memorable way, reaching a higher level of customer value than goods and services is seen as a critical factor.
Understanding what the customer really wants and needs, and what builds customer value when offering commercial experiences, then becomes particularly important as a driver of success. When studying a particular organization for best practice, several similarities between providing commercial experiences and working according to the core values of TQM were found and established as a factor of business excellence. Further, when it comes to providing commercial experiences, storytelling, theming and a creative environment stood out as additional factors of business excellence. Moreover, selecting the right co-workers based on their values, rather than merely their skills and academic qualifications, was seen as an important factor of success. The co-worker is often the co-creator of the experience together with the customer and therefore has an important part to play in the organization. Creating a corporate culture with co-workers sharing the values is seen as essential in order to run a successful business. It appears that any type of organization can provide an experience for the customer; the key is adding the extra value needed to reach the level of attractive quality. The commercial experience is described as deeply affecting both the feelings and senses of the customer, resulting in new memories; it is a memorable event the customer is willing to pay for. The commercial experience contains elements of engagement, personal relevance, novelty, surprise and learning, and is not limited to certain types of businesses. The fact that this is an area of increasing business interest but as yet a poorly explored one indicates that there is a need to develop improved ways of working, tools and methods, tailor-made for providing commercial experiences. Improved tools for identifying customer expectations and measuring customer satisfaction are clearly needed, especially since this is a growing industry that cannot be ignored.
Welcome to further explore the experience economy, where new memories are so highly valued that people are prepared to pay for them!</p> Wed, 9 Dec 2009 13:26:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-10599 Pernilla Ingelsson How to create a commercial experience : Focus on Leadership, Values and Organizational Culture http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-10598 <p>A new kind of commercial offer is on the rise, that of a commercial experience. It is said to be the next progression of value after a service, distinct from a service in several ways, two important ones being a) the provider having to create something new or memorable for the customer, i.e. creating attractive quality, and b) the offer being a co-creation between the customer and the provider.</p><p>Little has been written, though, about how creating a commercial experience can affect the way organizations should work. One of the areas that ought to be affected is the way organizations work to shape and coordinate co-workers’ and leaders’ behaviors by having a common set of values, or in other words a strong organizational culture. A number of studies show that the leaders in an organization have a strong influence on its culture, while others show that working with Total Quality Management (TQM) can enhance the corporate values and lead to profitable organizations.</p><p>The purpose of this thesis was to explore and contribute knowledge about how to create a commercial experience. The more specific purpose was to explore this area in relation to leadership, values, organizational culture and TQM.</p><p>To fulfill these purposes, two case studies were carried out with the intention of finding ways of working.
The first focused on how a renowned organization that offers commercial experiences works, and the second on organizations offering commercial experiences in the region of Jämtland.</p><p>One conclusion drawn from the research is that methodologies and tools that aim directly to enhance the organization’s values, and hence its culture, might be of even more importance in organizations offering a commercial experience. It seems to be important to be aware that values need to be translated into behaviors to make them understandable in the organization. Storytelling is one tool that can be used as an enhancer of organizational culture, a tool that might be fairly unrecognized for this purpose. It is also evident that the leadership practiced within the organization is crucial if a strong organizational culture is to be achieved.</p><p>Further, strategies for selecting the right values appear to be important when trying to create a strong organizational culture - a strategy not so evident within TQM. This could be one area where TQM needs to be developed, in order to support the creation of a commercial experience but also to implement TQM more effectively.</p> Wed, 9 Dec 2009 13:27:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-10598 Anna Nilsson Identification and Syntheses of Semiochemicals Affecting Mnesampela privata and Trioza apicalis http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-10596 <p>The autumn gum moth, <em>Mnesampela privata </em>(Lepidoptera: Geometridae), is an endemic Australian moth whose larvae feed upon species of <em>Eucalyptus.</em> The moth’s favorite host plants are <em>E. globulus </em>and<em> E. nitens</em>, which are the most important species used in commercial plantations of the Australian pulpwood industry. The autumn gum moth has become one of the most significant outbreak insects of eucalyptus plantations throughout Australia. As a consequence, great financial losses to the forest industry occur.
Today insecticides such as pyrethroids are used for control of eucalyptus defoliators such as <em>M. privata</em>.</p><p>The carrot psyllid, <em>Trioza apicalis</em> (Homoptera: Psylloidea), is one of the major pests of carrot (<em>Daucus carota</em>) in northern Europe. The psyllid causes curling of the carrot leaves and reduced plant growth. Today the carrot crops are protected with the pyrethroid insecticide cypermethrin, which is toxic to aquatic organisms and is, from 2010, prohibited for use in Sweden by the Swedish Chemicals Inspectorate.</p><p>An alternative to insecticides is to protect the seedlings with semiochemicals, chemical substances, or mixtures of them, that carry a message. This thesis describes the identification and the syntheses of semiochemicals from the above-mentioned insect species.</p><p>From analysis of abdominal tip extracts of <em>M. privata</em> females from Tasmania, a blend of (3<em>Z</em>,6<em>Z</em>,9<em>Z</em>)-3,6,9-nonadecatriene and (3<em>Z</em>,6<em>Z</em>,9<em>Z</em>)-3,6,9-heneicosatriene was identified as the sex pheromone of this species. The identification of the C<sub>19</sub>- and C<sub>21</sub>-trienes was confirmed by synthesis.</p><p>In the analysis of carrot leaf extracts we found a compound, α-<em>cis</em>-bergamotene, that induces an antennal response in the carrot psyllid. These studies are a first step toward manipulating this psyllid with semiochemicals instead of insecticides.</p> Wed, 9 Dec 2009 08:48:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-10596 Marie Lund Ohlsson New methods for movement technique development in cross-country skiing using mathematical models and simulation http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-10041 <p>This Licentiate Thesis is devoted to the presentation and discussion of some new contributions in applied mathematics directed towards scientific computing in sports engineering. 
It considers inverse problems of biomechanical simulation with rigid-body musculoskeletal systems, especially in cross-country skiing. This contrasts with the main research on cross-country skiing biomechanics, which is based mainly on experimental testing. The thesis consists of an introduction and five papers. The introduction motivates the context of the papers and puts them into a more general framework. Two papers (D and E) consider studies of real questions in cross-country skiing, which are modelled and simulated. The results give some interesting indications concerning these challenging questions, which can be used as a basis for further research. However, the measurements are not accurate enough to give the final answers. Paper C is a simulation study which is more extensive than papers D and E, and is compared to electromyography measurements in the literature. Validation in biomechanical simulations is difficult, and reducing mathematical errors is one way of reaching closer to more realistic results. Paper A examines well-posedness for forward dynamics with full muscle dynamics. Moreover, paper B is a technical report which describes the problem formulation, mathematical models and simulation from paper A in more detail. Our new modelling, together with the simulations, enables new possibilities. This is similar to simulations of applications in other engineering fields, and needs to be handled with the same care in order to achieve reliable results. The results in this thesis indicate that it can be very useful to use mathematical modelling and numerical simulations when describing cross-country skiing biomechanics. 
Hence, this thesis contributes to the possibility of beginning to use and develop such modelling and simulation techniques also in this context.</p> Thu, 15 Oct 2009 13:12:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-10041 Lisa Nordin Measurement and prediction of dewatering characteristics for mechanical pulps using optical fibre analysis http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-10001 <p>On-line measurement of relevant fibre and pulp characteristics is necessary in order to increase productivity and to maintain uniform quality. The drainage properties within the wire and the press section are important factors, since they affect the dry content after the press section. The higher the dry content, the lower the steam consumption and thus the less energy is consumed. In some cases the drier section has a limiting capacity, and thus decreased web dryness will reduce the production. The runnability in the paper machine is also affected by the dry content after the press section, because web breaks might occur in the drier section or in the calender.</p><p>The long-term aim of this work was to obtain an on-line measurement of dewatering behaviour in paper machines based on optically measured fibre and fines characteristics. However, due to the difficulty in obtaining pulps with a sufficient spread in dewatering properties and the difficulty in varying the pulp characteristics on one single paper machine, a comparative study between four different laboratory dewatering methods was conducted as a first step. Optically measured fibre characteristics were used to attempt to predict the dewatering behaviour of the different laboratory equipment for different mechanical pulps. 
In addition, a designed experiment was conducted in order to further evaluate the quality of the optical fibre and fines measurements.</p><p>The results showed that there are rough correlations between the dewatering methods; however, they rank the pulps differently depending on the wood raw material used and whether the refining conditions are gentle or harsh. The prediction models formulated for the dewatering methods based on optically measured fibre characteristics showed rather good correlation between measured and calculated values; however, not sufficiently good for use in on-line applications. It was also found that the same measured fines amounts show different dewatering behaviour, depending on the quality of the fines used. The difference in fines quality was, however, not reflected in the optical measurement, and it was thus concluded that there is a need for higher resolution of the measurement equipment in order to make it possible to measure the shape and the exact amount of the fines.</p><p>The results obtained from this work have increased both knowledge and understanding and can hopefully be utilized in characterizing paper machine dewatering with on-line measurements of fibre properties in the future.</p> Thu, 8 Oct 2009 15:09:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-10001 Karin Walter Influence of acid hydrogen peroxide treatment on refining energy and TMP properties http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-9207 <p>The potential of using acid hydrogen peroxide under Fenton conditions to lower the electrical energy consumed during the production of black spruce (Picea mariana) thermomechanical pulp (TMP) was investigated. 
The chemical system, which consisted of ferrous sulphate, hydrogen peroxide and optionally an enhancer (3,4-dimethoxybenzyl alcohol, ethylenediaminetetraacetic acid or oxalic acid/sodium oxalate), was evaluated as an inter-stage treatment where the primary refiner was used as a mixer. The produced TMPs were thoroughly characterised in order to explain the effect of the chemical system on fibre development and to be able to propose a mechanism for the refining energy reduction. The possibility of improving the optical properties of the treated pulps by washing, chelation, and bleaching with sodium dithionite or hydrogen peroxide was evaluated.</p><p> </p><p>The results obtained in a pilot plant trial show that it is possible to significantly reduce the comparative specific energy consumption by approximately 20% and 35%, at a freeness value of 100 ml CSF or a tensile index of 45 Nm/g, by using 1% and 2% hydrogen peroxide, respectively. The energy reduction is obtained without any substantial change in the fractional composition of the pulp, though tear strength is slightly reduced, as are brightness and pulp yield. No major differences between the reference pulp and the chemically treated pulps were found with respect to fibre length, width or cross-sectional dimensions. However, the acid hydrogen peroxide-treated pulps tend to have more collapsed fibres, higher flexibility, a larger specific surface area and a lower coarseness value. The yield loss accompanying the treatment is mainly a consequence of degraded hemicelluloses. It was also found that the total charge of the chemically treated pulps is higher than that of the reference pulps, something that may have influenced the softening behaviour of the fibre wall.</p><p> </p><p>A washing or chelating procedure can reduce the metal ion content of the chemically treated TMPs considerably. 
The amount of iron can be further reduced, to a level similar to that of untreated pulps, by performing a reducing agent-assisted chelating stage (QY) with dithionite. The discoloration cannot, however, be completely eliminated. The brightness decrease of the treated pulps is thus not only caused by the higher iron content in the pulp, but is also dependent on the type of iron compound and/or other coloured compounds connected with the acid hydrogen peroxide treatment. Oxidative bleaching with hydrogen peroxide (P) is more effective than reductive bleaching with sodium dithionite in regaining the brightness lost during the energy-reducing treatment. Using a QY P sequence, a hydrogen peroxide charge of 3.8% was needed to reach an ISO brightness of 75% for the chemically treated pulps. The corresponding hydrogen peroxide charge for the untreated TMP reference was 2.5%.</p><p> </p><p>The radicals generated in the Fenton reaction will probably attack and weaken/soften the available outer fibre wall layers. This could facilitate fibre development and consequently lower the electrical energy demand for a certain degree of refinement.</p> Wed, 24 Jun 2009 08:30:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-9207 Christine Malmgren Nanoscaled Structures in Ruthenium Dioxide Coatings http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-8728 <p>An essential ingredient in the generation of environmentally compatible pulp bleaching chemicals is sodium chlorate. Chlorate is produced in electrochemical cells, where the electrodes are the key components. In Sweden the so-called DSA® electrodes with catalytic coatings have been produced for more than 35 years. The production of chlorate uses a large amount of electric energy, and a decrease of just five percent of this consumption would, globally, save electrical energy corresponding to the output of half a nuclear power reactor. 
The aim of this project is to improve the electrode design on the nanoscale to decrease the energy consumption. The success of the DSA® depends on the large catalytic area of the coating; however, little is known about the actual structure at the nanometer level. To increase the understanding of the nanostructure of these coatings, we used a number of methods, including atomic force microscopy, transmission electron microscopy, X-ray diffraction, porosimetry, and voltammetric charge. We found that the entire coating is built up of loosely packed rutile monocrystalline grains 20-30 nm in size. The small grain size gives the large area and, consequently, a lower cell voltage and reduced energy consumption. A method to control the grain size would thus be a way to control the electrode efficiency. To alter the catalytically active area, we made changes in the coating process parameters. We found a dependency of the crystal-grain sizes on the choice of ruthenium precursor and processing temperature. The use of ruthenium nitrosyl nitrate resulted in smaller grains than ruthenium chloride, and lowering the temperature tended to favour smaller grains. A more radical approach would be to create a totally different type of electrode, manufactured in another way than the 1965 DSA® recipe. Such new types of electrodes, based on, for example, nanowires or nanoimprint lithography, are discussed as future directions.</p> Thu, 19 Mar 2009 12:59:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-8728 Sofia Reyier Bonding Ability Distribution of Fibers in Mechanical Pulp Furnishes http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-8033 <p>This thesis presents a method of measuring the distribution of fiber bonding ability in mechanical pulp furnishes. The method is intended for industrial use, where today only average values are used to describe fiber bonding ability, despite the differences in morphology of the fibers entering the mill. 
Fiber bonding ability here refers to the mechanical fiber's flexibility and ability to form large contact areas with other fibers, characteristics required for good paper surfaces and strength.</p><p> </p><p>Five mechanical pulps (Pulps A-E), all produced in different processes from Norway spruce (<em>Picea abies</em>), were fractionated in hydrocyclones with respect to fiber bonding ability. Five streams were formed from the hydrocyclone fractionation, Streams 1-5. Each stream plus the feed (Stream 0) was fractionated according to fiber length in a Bauer McNett classifier to compare the fibers at equal fiber lengths (Bauer McNett screens of 16, 30, 50, and 100 mesh were used).</p><p> </p><p>Stream 1 was found to have the highest fiber bonding ability, evaluated as tensile strength and apparent density of long-fiber laboratory sheets. External fibrillation and collapse resistance index, measured in FiberLab<sup>TM</sup>, an optical measurement device, also showed this result. Stream 5 was found to have the lowest fiber bonding ability, with a consecutively falling scale between Stream 1 and Stream 5. The results from acoustic emission measurements and cross-sectional scanning electron microscopy analysis confirmed the same pattern. The amount of fibers in each hydrocyclone stream was also regarded as a measure of the fibers' bonding ability in each pulp.</p><p> </p><p>The equation for the predicted Bonding Indicator (BIN) was obtained by combining, through linear regression, the collapse resistance index and external fibrillation of the P16/R30 fractions for Pulps A and B. The predicted Bonding Indicator was found to correlate well with measured tensile strength. 
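The regression step described above can be sketched as follows. This is an illustrative sketch only: the two predictors (collapse resistance index and external fibrillation) are named in the abstract, but all numbers and coefficients below are hypothetical, not data from the thesis.

```python
# Sketch of a BIN-style predictor: fit a linear model on two optically
# measured fiber characteristics, then apply it per fiber to obtain a
# distribution. All values are made-up examples.
import numpy as np

# Hypothetical calibration data: (collapse resistance index,
# external fibrillation) per fraction, and measured tensile index (Nm/g).
X = np.array([[1.2, 0.8],
              [1.0, 1.1],
              [0.7, 1.5],
              [0.5, 1.9]])
y = np.array([38.0, 42.0, 47.0, 52.0])

# Least-squares fit of BIN = a*collapse + b*fibrillation + c
A = np.column_stack([X, np.ones(len(X))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def bin_value(collapse, fibrillation):
    """Predicted Bonding Indicator for a single fiber."""
    return coef[0] * collapse + coef[1] * fibrillation + coef[2]

# Applying the equation fiber by fiber yields a BIN distribution
# for the whole pulp, as the abstract describes.
fibers = [(1.1, 0.9), (0.6, 1.7), (0.9, 1.2)]
bin_distribution = [bin_value(c, f) for c, f in fibers]
```

The key design point mirrored here is that the model is calibrated once on fraction-level averages and then evaluated on raw per-fiber measurements, turning a single average quality number into a distribution.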
The BIN-equation was then applied to the data for Pulps C-E, P16/R30, and Pulps A-E, P30/R50, and the predicted Bonding Indicator showed good correlations with tensile strength for these fibers as well.</p><p> </p><p>From the fiber raw data measured by the FiberLab<sup>TM</sup> instrument, the BIN-equation was applied to each individual fiber. This made it possible to calculate a BIN-distribution of the fibers, that is, a distribution of fiber bonding ability.</p><p> </p><p>The thesis also shows how the BIN-distributions of fibers can be derived from FiberLab<sup>TM</sup> measurements of the entire pulp without first mechanically separating the fibers by length, for example in a Bauer McNett classifier. This is of great importance, as the method is intended for industrial use, possibly as an on-line method. Hopefully, the BIN-method will become a useful tool for process evaluations and optimizations in the future.</p> Thu, 8 Jan 2009 08:23:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-8033 Lisbeth Hellström Fracture processes in wood chipping http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-7827 <p>In both the chemical and the mechanical pulping process, the logs are cut into wood chips by a disc chipper before fibre separation. To make the wood chipping process more efficient, one has to investigate in detail the coupling between the process parameters and the quality of the chips. The objective of this thesis is to obtain an understanding of the fundamental mechanisms behind the creation of wood chips. Both experimental and analytical/numerical approaches have been taken in this work. The experimental investigations were performed with in-house developed equipment and digital speckle photography equipment. The results from the experimental investigation showed that the friction between the log and the chipping tool is probably one crucial factor for chip formation. 
Furthermore, it was found that the indentation process is approximately self-similar, and that the stress field over the entire crack plane is critical for chip creation. The developed analytical model predicts the normal and shear strain distributions. The analytical distributions are in reasonable agreement with the corresponding distributions obtained from a finite element analysis.</p> Wed, 20 May 2009 16:31:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-7827 Johan Jason Theory and Applications of Coupling Based Intensity Modulated Fibre-Optic Sensors http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-7057 <p>Optical fibre sensors can be used to measure a wide variety of properties. In some cases they have replaced conventional electronic sensors due to their ability to perform measurements in environments suffering from electromagnetic disturbance, or in harsh environments where electronics cannot survive. In other cases they have had less success, mainly due to the higher cost involved in fibre-optic sensor systems. Intensity modulated fibre-optic sensors normally require only low-cost monitoring systems, principally based on light-emitting diodes and photodiodes. The sensor principle itself is very simple when based on coupling between fibres, and coupling based intensity modulated sensors have long found applications, mainly within position and vibration sensing. In this thesis new concepts and applications for intensity modulated fibre-optic sensors based on coupling between fibres are presented. From a low-cost, standard-component perspective, alternative designs are proposed and analyzed in order to improve performance. The development of a sensor for an industrial temperature sensing application, involving aspects of multiplexing and fibre network installation, is presented. 
Optical time domain reflectometry (OTDR) is suggested as an efficient technique for multiplexing several coupling based sensors, and sensor network installation with blown fibre in micro ducts is proposed as a flexible and cost-efficient alternative to traditional cabling. A new sensor configuration using a fibre-to-multicore-fibre coupling and an image sensor readout system is proposed. With this system a high-performance sensor setup with a large measurement range can be realised without the need for the precise fibre alignment often required in coupling based sensors involving fibres with small cores. The system performance is analyzed theoretically with complete system simulations of different setups. An experimental setup is made based on standard fibre and image acquisition components, and differences from the theoretical performance are analyzed. It is shown that sub-µm accuracy, the theoretical limit, should be obtainable, and it is further suggested that the experimental performance is mainly related to two error sources: core position instability and differences between the real and the expected optical power distribution. Methods to minimize the experimental error are proposed and evaluated.</p> Fri, 14 Nov 2008 13:24:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-7057 Tomas Unander Characterization of Low Cost Printed Sensors for Smart Packaging http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-7049 <p>There is currently very significant interest in printed electronics worldwide. The possibility of producing electronics in a roll-to-roll printing process will considerably reduce the cost of electronic devices. However, these new devices will most probably not replace traditional silicon-based electronics, but will complement it in low-cost applications such as intelligent packages and other printable media. One interesting area is printable low-cost sensors that add value to packages. 
In this thesis a study of the performance of low-cost sensors is presented. The sensors were fabricated using commercial printing processes from the graphical printing business. The sensors were characterized and evaluated for the intended applications. The evaluated sensors were moisture-sensing and touch-sensitive solutions.</p><p>A printable touch-sensitive sensor solution is presented where the sensor is incorporated into a high-quality image, such as in point-of-sale displays. The sensor solution showed good touch sensitivity at a variety of humidity levels. Four printed moisture sensor concepts are presented and characterized. Firstly, a moisture sensor that shows good correlation with the moisture content of cellulose-based substrates. Secondly, a sensor that measures the relative humidity in the air, with a measuring accuracy of 0.22% at high relative humidity levels. Thirdly, a moisture sensor that utilizes unsintered silver nanoparticles to measure the relative humidity in the air, with a linear response at very low relative humidity levels. Fourthly, an action-activated energy cell that provides power when activated by moisture. A concept for remote moisture sensing that utilizes ordinary low-cost RFID tags has also been presented and characterized. The remote sensor solution works with both passive and semi-passive RFID systems. The study shows that it is possible to manufacture low-cost sensors in commercial printing processes.</p> Thu, 18 Dec 2008 13:14:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-7049 Maria Weimer-Löfvenberg Projektet Björntanden : Om beslutsprocesser, entreprenörskap och politik http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-6872 <p>The present licentiate thesis deals with decision processes, entrepreneurship, and politics in a local context. The study focuses on the local project <em>Björntanden</em> in Östersund, an entrepreneurial idea aiming at regional development. 
The purpose of the thesis is to understand the decision processes that enable or hinder the realization of entrepreneurial ideas in a local context.</p><p>The background of the study is the notion that regional development is often considered a condition for economic growth. Concepts such as local and entrepreneurial processes are commonly used in this context. These processes are seldom run in isolation by single actors but normally in cooperation with others. It is not uncommon that actors such as politicians and other representatives of local government and state agencies play an important role in influencing the conditions for local entrepreneurship. As decision processes in the private and public sectors differ, this creates coordination problems.</p><p>Cooperation between many actors also tends to create coordination problems when actors with different organisational principles and organisational cultures meet in a joint arena. This is further accentuated when business people, civil servants, politicians and others are to cooperate in decision-making on entrepreneurial ideas that are often inherently unclear.</p><p>I followed the project <em>Björntanden</em> for four years through observations, interviews and studies of published and unpublished documents. On the basis of the experiences of the actors involved, I have interpreted the meaning of their actions, i.e., what they have said and what they have done, in order to form the concepts used in the study. Through an interactive process between received theory and the gradually evolving results of the empirical study, I have attempted to reach an understanding of the decision processes by linking local conditions, i.e. points of departure for entrepreneurial ideas in a local context, with different types of decision processes. 
The analysis indicates aspects that may create opportunities as well as obstacles for successful decision-making when entrepreneurship and politics are to act in cooperation.</p> Mon, 3 Nov 2008 08:46:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-6872 Niklas Lepistö FPGA based architectures for embedded video systems http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-188 Tue, 4 Mar 2008 08:45:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-188 Henrik Andersson Development of Process Technology for Photon Radiation Measurement Applications http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-9334 <p>This thesis presents work related to new types of photo detectors and their applications. The focus has been on the development of process technology and methods by means of experimentation and measurements. The overall aim has been to develop and improve photon radiation measurement applications which are possible to manufacture using standard Si processing technology.</p><p>A new type of position sensitive detector with switching possibilities based on the MOS principle has been fabricated and characterized. The influence of mechanical stress on the linearity of position sensitive detectors has been investigated. The results show that mechanical stress, arising for example from the mounting of detectors in capsules, can have an impact on device performance. Under normal circumstances these effects are rather small, but are considered worthwhile to take into account.</p><p>Electroless deposition of nickel, including various dopants, in porous silicon was performed to manufacture electrical contacts for this interesting material. 
After heat treatment, it was confirmed by X-ray diffraction that nickel silicide had formed, and I-V measurements show that different contacts exhibit ohmic and rectifying behaviour.</p><p>Spectrometers are used extensively in the process and food industry to measure both the chemical content and the amount of substances used during manufacturing. These instruments are often rather bulky and costly, though the trend is towards smaller and more portable equipment. A spectrometer based on an array of Fabry-Perot interferometers mounted close to an array detector is shown to be a viable option for the manufacture of a very compact device. Such a device has minimal intermediate optics and it may be possible, in the future, for it to be developed and completely integrated with a detector array into a single unit.</p> Fri, 10 Jul 2009 12:47:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-9334 Jon Alfredsson Performance of Digital Floating-Gate Circuits Operating at Subthreshold Power Supply Voltages http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-9333 <p>Everyone involved in electronic design knows that one of the critical issues in today's electronics is power consumption. Designers are always looking for new approaches to reduce currents while still retaining performance. Floating-gate (FGMOS) circuits have previously been shown to be a promising technique for improving speed while keeping the power consumption low when the power supply is reduced to subthreshold voltages for the transistors. In this thesis, the goal is to determine how well floating-gate circuits compare to conventional static CMOS when the circuits operate in subthreshold. The most interesting performance parameters are speed and power consumption, and specifically the Energy-Delay Product (EDP), which is a combination of the two. 
To get a view of how the performance varies and how good the FGMOS circuits are in the best case, the circuits have been designed and simulated for best-case performance.</p><p>The investigation also includes trade-offs between speed and power consumption for better performance, how to select floating-gate capacitances, how a large circuit fan-in will affect performance, and the influence of different kinds of refresh circuits.</p><p>The first simulations of the FGMOS circuits in a 0.13 μm process yielded several interesting results. First of all, in the best case it is shown that FGMOS has the potential to achieve up to 260 times better EDP performance compared to CMOS at a 150 mV power supply. Simulations of FGMOS capacitances show that the minimum floating-gate capacitance can be as small as 400 fF, and more realistic simulations show that EDP is 37 times better for FGMOS (with parasitic capacitances included). Other aspects of FGMOS design have been to look at how refresh circuits affect performance (semi-floating-gate circuits) and how a larger fan-in changes noise margin and EDP. It turns out that refresh circuits with the same transistor size do not give a noticeable change in performance, while an increase of 8 times in size gives between 5 and 10 times worse EDP. When it comes to fan-in, the simulations show that a maximum fan-in of 5 is possible at a 250 mV supply, decreasing to 3 when the supply voltage is reduced to 150 mV.</p><p>Finally, it should be kept in mind that tuning the performance of FGMOS circuits with trade-offs, and by changing the floating-gate voltages, to achieve results like the ones stated above will also always affect the noise margins, NM, of the circuits. 
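The Energy-Delay Product mentioned above can be made concrete with a small worked example. EDP is simply energy per switching event multiplied by delay, so lower is better; the component numbers below are hypothetical and chosen only so that the ratio matches the 260x best-case figure quoted in the abstract.

```python
# Worked sketch of the Energy-Delay Product (EDP) figure of merit:
# EDP = energy per operation * propagation delay (lower is better).
# The energy/delay values are made-up examples, not thesis measurements.
def edp(energy_joules, delay_seconds):
    """Energy-Delay Product; combines power cost and speed into one number."""
    return energy_joules * delay_seconds

cmos_edp = edp(2.6e-15, 1.0e-6)   # hypothetical static CMOS gate at 150 mV
fgmos_edp = edp(1.0e-16, 1.0e-7)  # hypothetical FGMOS gate at 150 mV

improvement = cmos_edp / fgmos_edp  # "times better" EDP for FGMOS
```

Because EDP multiplies two quantities that usually trade off against each other, a circuit can win on EDP by being moderately better on both energy and delay rather than extreme on either.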
As a consequence, the NM will sometimes be so close to 1 that a fabricated circuit with that NM may not be as functional as the simulations suggest. The feasibility of designing functional FGMOS circuits in subthreshold does not, however, appear to be a problem.</p> Fri, 10 Jul 2009 12:12:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-9333 Rahim Rahmani Models for quality of service in heterogeneous networks http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-9331 <p>Both streaming techniques and wireless heterogeneous networks have recently become more widely deployed. As current streaming techniques are primarily designed for homogeneous wired networks, streaming multimedia applications in heterogeneous networks can perform poorly due to wireless network conditions and vertical handover. These problems can significantly degrade the performance of streaming multimedia applications. Effects of the degradation are delay, jitter and packet loss, resulting in lower multimedia quality. The use of some congestion recovery algorithms is a further detriment to QoS. One approach used to control congestion in the network layer is Active Queue Management (AQM). This dissertation presents an evaluation and comparison of AQM mechanisms in heterogeneous networks. Based on the results of this research, a new AQM algorithm, named Adaptive AQM (AAQM), is proposed. The AAQM uses a control law and link utilization in order to manage congestion. The action of the control law is to mark incoming packets in order to keep the quotient between arriving and departing packets as low as possible. The AAQM enhances performance with respect to queue length and packet loss as well as buffer space requirements. 
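The marking idea described above can be sketched as follows. This is a simplified stand-in, not the AAQM control law itself (which also incorporates link utilization); the proportional gain and the clamping are assumptions for illustration.

```python
# Minimal sketch of quotient-based packet marking: mark incoming packets
# with a probability that grows as the arrival/departure quotient exceeds 1,
# i.e. as the queue starts to build up. The gain value is hypothetical.
def mark_probability(arrived, departed, gain=0.5):
    """Probability of marking an incoming packet, clamped to [0, 1]."""
    if departed == 0:
        return 1.0  # queue is not draining at all: mark everything
    quotient = arrived / departed
    # Proportional control on the excess over a balanced quotient of 1.
    return max(0.0, min(1.0, gain * (quotient - 1.0)))
```

When arrivals balance departures the quotient is 1 and nothing is marked; as arrivals outpace departures the marking probability rises, signalling senders to back off before the buffer overflows.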
AAQM outperforms the other AQM algorithms in terms of multimedia packet loss and buffer delay.</p> Fri, 10 Jul 2009 11:56:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-9331 Maria Gylle Physiological responses of marine and brackish Fucus vesiculosus L. with respect to salinity http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-9329 <p>The intertidal brown alga Fucus vesiculosus L. is mainly a marine species (34 practical salinity units, psu), but the alga also grows in the sublittoral of the brackish Bothnian Sea (part of the Baltic Sea; 5 psu). The conditions at the growth sites differ clearly between the Bothnian Sea and the Norwegian Sea (part of the Atlantic), with constantly low salinity and a lack of tides in the Bothnian Sea. The objectives of the thesis were to compare the physiology of marine and brackish ecotypes of F. vesiculosus with respect to salinity and the ability of F. vesiculosus to acclimate to different salinities. A study of maximum photosynthetic capacity and the relative amount of Rubisco in relation to salinity in brackish F. vesiculosus was also performed. The results showed that both ecotypes of F. vesiculosus have the same potential to use the available excitation energy for photochemistry. The results also suggest that this is relatively independent of salinity changes. The marine ecotype had a higher number of water-soluble organic compounds, higher mannitol content (mmol kg-1 DW), lower chlorophyll (Chl) content (mg g-1 DW) and higher tolerance to desiccation. The number of water-soluble carbon compounds did not change when the algae were exposed to either high or low salinities, and it was suggested that the differences were due to intertidal or sublittoral acclimation, and not salinity. Both ecotypes showed changed mannitol content in response to changed salinity, but the changes differed between the ecotypes and seasons. 
The mannitol content and the osmotic adjustment by mannitol on timescales longer than 24 h appear to be closely connected to irradiance and photosynthesis in addition to salinity. The main reason for the higher rate of photosynthesis at higher salinity in the brackish ecotype remains unclear, because no correlation could be detected between photosynthesis and the relative amount of Rubisco. The Chl content increased in darkness, and the differences between the ecotypes are probably due to compensation for low irradiance at the sublittoral growth site. The higher tolerance to desiccation in the marine ecotype was concluded to be due to a lower rate of water loss, caused by a higher mannitol content and a thicker thallus.</p> Fri, 10 Jul 2009 11:38:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-9329 Robert de Bruijn Cardiovascular and hematological responses to voluntary apnea in humans http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-9328 <p>This thesis deals with cardiovascular and hematological responses to voluntary apnea in humans, with a special focus on O2 usage and storage. Humans, and many other air-breathing animals, respond to apnea (breath holding) with a collection of interacting cardiovascular reflexes, which are collectively called the diving response. In humans, the main characteristics of the diving response are a reduction in heart rate (bradycardia), decreased cardiac output, peripheral vasoconstriction and increased arterial blood pressure. Another response during apnea in mammals, more recently also observed in man, is a transient increase in hemoglobin concentration across a series of apneas, probably due to a reduction in spleen size. There may also be long-term effects on erythropoiesis in the apneic diver, as suggested by the high hemoglobin levels observed in divers.
The focus of the included studies is the short transient diving response (I), the more slowly occurring transient hematological changes in response to apnea, most likely related to a reduction in spleen size (II), and the possible effects of repeated apnea on serum erythropoietin concentration (III).</p><p>I) The aim was to study the effects of body immersion on the O2-conserving effect of the human diving response. The results showed that, regardless of body immersion, apnea with face immersion causes a stronger cardiovascular diving response than apnea alone, leading to a smaller reduction in arterial oxygen saturation levels. Thus the diving response is triggered and conserves O2 even during whole-body immersion, which had previously only been observed during apnea without whole-body immersion.</p><p>II) The aim was to study hematological responses to voluntary repeated maximal-duration apneas in divers and non-divers. Increases in hemoglobin concentration were found across a series of 3 apneas in elite breath-hold divers, elite cross-country skiers and untrained subjects. However, a larger increase in hemoglobin was found in divers compared to non-divers, which suggests a possible training effect of their extensive apnea-specific training. In contrast, physical endurance training does not appear to affect the hematological response to apnea.</p><p>III) The aim was to study the effects of serial voluntary apnea on the serum erythropoietin concentration. In a comparison between elite breath-hold divers and subjects untrained in apnea, divers were found to have a 5% higher resting hemoglobin concentration. An average maximum increase in erythropoietin of 24% was found in untrained subjects after 15 maximal-duration apneas, preceded by 1 min of hyperventilation.
This suggests a possible erythropoietic effect of apnea-induced hypoxia, which may connect the increased resting hemoglobin found in divers to their apnea-specific training.</p><p>It was concluded from these studies that man responds to apnea with a series of different adjustments that limit O2 usage and increase O2 storage: the classical diving response effectively restricts O2 consumption even during full immersion; the spleen-related hemoglobin increase occurs in both divers and non-divers with different levels of physical training, but is more prominent in divers; and finally, the observed high levels of hemoglobin concentration in divers may be related to enhanced erythropoiesis during dive training.</p> Fri, 10 Jul 2009 11:31:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-9328 Suliman A Abdalla Architecture and circuit design of photon counting readout for X-ray imaging sensors http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-9327 <p>Hybrid pixel array detectors for X-ray imaging are based on different technologies for the sensor and the readout electronics. The readout electronics are based on standard CMOS technologies, which are experiencing continuous, rapid improvement through down-scaling of the feature sizes; this in turn leads to higher transistor densities, lower power consumption and faster circuits. For pixel-array imaging sensors, the improvements in CMOS technology open up new possibilities of integrating more functionality in the pixels for local processing of the sensor data. However, new issues related to the tight integration of both analog and digital processing circuits within the small area of a pixel must also be evaluated.</p><p>The advantages of down-scaling the CMOS technology can be utilized to increase the spatial resolution by reducing the pixel sizes.
Recent research indicates, however, that the bottleneck in reaching higher spatial resolution in X-ray imaging sensors may not be the circuit area occupied by the functions necessary in the pixels. Instead, it is related to charge sharing, where the charge generated by the sensor is distributed over a neighbourhood of pixels, which limits the spatial resolution and leads to a distortion of the energy spectrum. In this thesis, a mechanism to be implemented in the readout circuits is proposed in order to suppress the charge-sharing effects. The proposed architecture and its circuit implementation are evaluated with respect to circuit complexity (area) and power consumption. For a photon-counting pixel it is demonstrated that the complete pixel, with the charge-sharing suppression mechanism, can be implemented using 300 transistors with an idle power consumption of 2.7 μW in a 120 nm CMOS technology operating with a 1.2 V power supply.</p><p>The improvements in CMOS technology can also be used to increase the range of applications for X-ray imaging sensors. In this thesis, an architecture is proposed for multiple energy discrimination, called color X-ray imaging. The proposed solution is the result of balancing circuit complexity against image quality. The method is based on color sub-sampling with intensity biasing. For three-level energy discrimination, which corresponds to the R, G and B color components of color imaging systems for visible light, the circuit complexity is only 20% higher than that of the Bayer method, while the image quality is significantly better.</p><p>As the circuit complexity of the digital processing within each pixel increases, digitally induced noise may play an increasingly important role for the signal-to-noise ratio in the measurements.
In this thesis, an initial study is conducted of how digital switching noise affects the analog amplifiers in the photon-counting pixel.</p> Fri, 10 Jul 2009 11:31:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-9327 Niklas Klinga The influence of fibre characteristics on bulk and strength properties of TMP and CTMP from spruce http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-9112 <p>This thesis is intended to contribute to increased knowledge of the influence of fibre characteristics on the bulk and strength properties of thermomechanical pulp (TMP) and chemithermomechanical pulp (CTMP) from spruce. It deals with laboratory sheet properties and how they are affected by the conditions during pressing and drying, e.g. pressure, temperature and the dryness level the sheets have reached when pressing and drying are terminated. Furthermore, it deals with how sheet properties depend on fibre properties, such as fibre length, fibre flexibility and fibre surface characteristics. The thesis is part of a long-term project with the goal of increasing the bending stiffness of paperboard; hence bulk and internal bond strength properties are of main interest. Apart from standard methods (ISO, TAPPI and Rapid Köthen), sheets have been pressed and dried in a modified Rapid Köthen dryer, which has the capacity to press the sheets at higher pressure than a standard Rapid Köthen dryer. The results showed that there are large differences in mechanical pulp sheet properties depending on how the sheets have been pressed and dried. The main factors contributing to the bulk and strength levels achieved are a combination of pressure, temperature and the dryness level to which the sheets are pressed. Sheets made from stiff fibres sprang back more when only wet pressed, and appeared to be less sensitive to pressure than sheets made from flexible fibres.
The situation was reversed when sheets were pressed and dried to full dryness at high temperature: pulps with stiff fibres were affected more by temperature and pressure than pulps with flexible fibres. When examining strength development with respect to the dryness level to which the sheets had been pressed at high temperature, the most interesting finding was that the increase in strength was not continuous, especially for the Z-strength development of high-freeness pulps and long-fibre fractions. There was a distinct inflection of the strength-dryness curve when dryness reached a level of ~50%, and the most important dryness interval for internal strength development was found between 50 and 80%. This result, combined with the fact that most paper and board machines only press the sheet to ~50% dryness before the sheet is fed into the drying section, shows that much of the inherent strength potential of mechanical pulps is unexploited. Commercial techniques for pressing to higher dryness levels are available, such as Condebelt drying and press drying. These techniques have, however, only been implemented to a limited extent. Further research on pressing to higher dryness levels will be continued at FSCN at Mid Sweden University. Pilot refining trials with HTCTMP from spruce showed that densification and strength development were achieved by two different mechanisms: by making fibres flexible with gentle high-consistency refining (HC refining) or by reducing fibre length with intense low-consistency refining (LC refining). It was found that high bulk at a very high Z-strength was achieved with LC refining, even though the fibre length was reduced, and at an extremely low energy input. The results showed that fibres with an extremely high content of sulphonated lignin on their surfaces and a low degree of fibrillation bond well, as long as the surfaces come into contact during pressing and drying.
This can be achieved either by making fibres flexible or by reducing fibre length. LC post-refining of spruce HTCTMP was found to be a very interesting process concept for the production of high-quality pulps intended for paperboard, at a very low total energy input of ~800 kWh/admt.</p> Mon, 8 Jun 2009 15:24:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-9112 Linda S. Karlsson Spatio-Temporal Pre-Processing Methods for Region-of-Interest Video Coding http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-51 <p>In video transmission at low bit rates, the challenge is to compress the video with a minimal reduction of the perceived quality. The compression can be adapted to knowledge of which regions in the video sequence are of most interest to the viewer. Region-of-interest (ROI) video coding uses this information to control the allocation of bits between the background and the ROI. The aim is to increase the quality in the ROI at the expense of the quality in the background. To achieve this, the typical content of an ROI for a particular application is first determined, and the actual detection is performed based on this information. The allocation of bits can then be controlled based on the result of the detection.</p><p>In this licentiate thesis, existing methods to control bit allocation in ROI video coding are investigated, in particular pre-processing methods that are applied independently of the codec or standard. This makes it possible to apply a method directly to the video sequence without modifications to the codec. Three filters are proposed in this thesis based on previous approaches: a spatial filter that only modifies the background within a single frame, and a temporal filter that uses information from the previous frame. These two filters are also combined into a spatio-temporal filter.
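The spatial-filter idea above can be sketched minimally: low-pass filter only the background so that the encoder spends fewer bits on it, leaving the ROI untouched. The box blur and function names below are illustrative assumptions; the thesis's actual filters differ in design.

```python
import numpy as np

def spatial_roi_filter(frame, roi_mask, kernel=5):
    """Box-blur the background (pixels outside roi_mask) of a grayscale
    frame so that an encoder allocates fewer bits to it; ROI pixels are
    left untouched. Illustrative sketch, not the thesis's filter."""
    pad = kernel // 2
    padded = np.pad(frame.astype(float), pad, mode="edge")
    h, w = frame.shape
    blurred = np.zeros((h, w), dtype=float)
    for dy in range(kernel):             # accumulate the kernel window
        for dx in range(kernel):
            blurred += padded[dy:dy + h, dx:dx + w]
    blurred /= kernel * kernel
    out = frame.astype(float).copy()
    out[~roi_mask] = blurred[~roi_mask]  # replace background only
    return out
```

A temporal variant could instead mix each background pixel with its value in the previous (already filtered) frame, and chaining the two would give a spatio-temporal filter.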
The abilities of these filters to reduce the number of bits necessary to encode the background, and to successfully re-allocate these bits to the ROI, are investigated. In addition, the computational complexities of the algorithms are analysed.</p><p>The theoretical analysis is verified by quantitative tests. These include measuring the quality using the PSNR of both the ROI and the border of the background, as well as subjective tests with human test subjects and an analysis of motion vector statistics.</p><p>The quantitative analysis shows that the spatio-temporal filter has a better coding efficiency than the other filters and successfully re-allocates bits from the background to the ROI. The spatio-temporal filter gives an improvement in average PSNR in the ROI of more than 1.32 dB, or a reduction in bitrate of 31%, compared to the encoding of the original sequence. This result is similar to or slightly better than that of the spatial filter. The spatio-temporal filter nevertheless performs better overall, since its computational complexity is lower than that of the spatial filter.</p> Thu, 20 Dec 2007 11:17:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-51 Martin Holmvall Striping on flexo post-printed corrugated board http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-111 <p>Striping is the most common print non-uniformity on corrugated board. It is defined as periodic print density and/or print gloss variations parallel to the flutes. Corrugated boards are mainly printed with flexography, making striping a major concern for the flexographic post-printing industry. In spite of its long history, the basic mechanisms of striping have not been fully understood, and no concrete solution has been provided. The objective of this thesis is to obtain an understanding of the fundamental mechanisms behind striping and to provide a solution to it. Both experimental and numerical approaches have been taken in this work.
Nonlinear finite element models have been constructed at both the corrugated-board and halftone-dot scales to determine the pressure distributions in the printing nip. Ink transfer experiments have been performed to determine the relations between print density and pressure. Parametric studies have been made of the effects of printing system variables and deformations. The results showed that striping predominantly consists of print density variations caused by pressure variations in the printing nip. The pressure variations are inherent to the corrugated board structure. Washboarding was shown to play a minor part in causing print density variations, but might contribute to gloss striping. A new printing plate design has been proposed to eliminate the pressure variations and hence the print density striping.</p> Mon, 18 Feb 2008 16:20:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-111 Claes Mattsson Fabrication and Characterization of Photon Radiation Detectors http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-37 <p>This thesis involves a study of the fabrication and characterization of photon radiation detectors. The focus has been to develop and improve the performance of optical measurement systems, but also to reduce their cost. The work is based on the study of two types of detectors: the position-sensitive detector and the thermal detector.</p><p>Infrared detectors are usually subcategorized into photonic detectors and thermal detectors. In thermal detectors, heat generated by the incident infrared radiation is converted into an electrical output by a sensitive element. The basic structure of these detectors consists of a temperature-sensitive element connected to a heat sink through a thermally isolating structure. Thin membranes of silicon and silicon nitride have commonly been used as thermal insulation between the heat sink and the sensitive elements.
However, these materials have a relatively high thermal conductivity, which lowers the response of the detector. The fabrication of such membranes also requires rather advanced processing techniques and equipment. SU-8 is an epoxy-based photoresist that has low thermal conductivity and requires only standard photolithography. A new application of SU-8 as a self-supported membrane in a thermal detector is presented. This application is demonstrated by the fabrication and characterization of both an infrared-sensitive thermopile and a bolometer detector. The bolometer consists of nickel resistors connected in a Wheatstone bridge configuration, whereas the thermopile uses serially interconnected Ti/Ni thermocouple junctions.</p><p>The position-sensitive detectors include the lateral-effect photodiodes (LPSDs) and the quadrant detectors. Typical applications for these detectors are distance measurement and centering devices. In the quadrant detectors, the active region consists of four pn-junctions separated by a narrow gap. The size of the active region in these detectors depends on the size of the light spot. In outdoor applications, this spot-size dependence degrades the performance of four-quadrant detectors. In this thesis, a modified four-quadrant detector with the pn-junctions separated by a larger distance has been fabricated and characterized. By separating the pn-junctions, the horizontal electric field in the active region is removed, making the detector insensitive to spot size.</p><p>The linearity of the lateral-effect photodiodes depends on the uniformity of the resistive layer in the active region. The introduction of mechanical stress in an LPSD results in a resistance change, mainly due to resistivity changes, and this affects the linearity of the detector.
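For background, the standard textbook read-out formulas for the two position-sensitive detector types discussed above can be sketched as follows. These are generic sum/difference estimates, not models of the specific devices characterized in the thesis:

```python
def quadrant_position(q_a, q_b, q_c, q_d):
    """Spot position from the four photocurrents of a quadrant detector
    (A = top-left, B = top-right, C = bottom-left, D = bottom-right).
    Generic textbook formula, normalized to the total current."""
    total = q_a + q_b + q_c + q_d
    if total == 0:
        raise ValueError("no photocurrent: spot is off the detector")
    x = ((q_b + q_d) - (q_a + q_c)) / total   # right minus left
    y = ((q_a + q_b) - (q_c + q_d)) / total   # top minus bottom
    return x, y

def lpsd_position(i1, i2, length=1.0):
    """Spot position on a one-dimensional lateral-effect photodiode from
    its two terminal currents, with x = 0 at the centre of an active
    region of the given length. Linearity assumes a uniform resistive
    layer, which is exactly what mechanical stress perturbs."""
    total = i1 + i2
    if total == 0:
        raise ValueError("no photocurrent")
    return (length / 2) * (i2 - i1) / total
```

A centered spot gives equal currents and hence a zero position estimate in both cases; any non-uniformity in the resistive layer biases the LPSD estimate, which is why stress-induced resistivity changes degrade linearity.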
Measurements and simulations in which mechanical stress is applied to LPSDs are presented and support this conclusion.</p> Fri, 23 Nov 2007 12:34:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-37 Stefan B. Lindström Simulations of the Dynamics of Fibre Suspension Flows http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-53 <p>A new model for simulating non-Brownian flexible fibres suspended in a Newtonian fluid has been developed. Special attention has been given to including realistic flow conditions found in the industrial papermaking process in the key features of the model; it is the intention of the author to employ the model in simulations of the forming section of the paper machine in future studies.</p><p>The model considers inert fibres of various shapes and finite stiffness, interacting with each other through normal, frictional and lubrication forces, and with the surrounding fluid medium through hydrodynamic forces. Fibre-fluid interactions in the non-creeping flow regime are taken into account, and the two-way coupling between the solid and the fluid phase is included by enforcing momentum conservation between the phases. The incompressible three-dimensional Navier-Stokes equations are employed to model the motion of the fluid medium.</p><p>The validity of the model has been tested by comparing simulation results with experimental data from the literature. It was demonstrated that the model predicts the motion of isolated fibres in shear flow over a wide range of fibre flexibilities. It was also shown that the model predicts details of the orientation distribution of multiple straight, rigid fibres in a sheared suspension. Model predictions of the viscosity and the first normal stress difference were in good agreement with experimental data found in the literature.
Since the model is based solely on first-principles physics, quantitative predictions could be made without any parameter fitting.</p> Mon, 7 Jan 2008 12:51:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-53 Lena-Maria Öberg Traceable Information Systems : Factors That Improve Traceability Between Information and Processes Over Time http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-515 <p>Preservation of information is not a new issue, but preservation of digital information has a relatively short history. Since the 1960s, when computers began to be used within administration, digital information has had to be preserved over time. The problem addressed in this research is how to preserve understandable information over time. Information is context dependent, which means that without context it is not possible to use the information. The process is one part of that context, and an important issue when preserving information is therefore to be able to trace an information object to the process in which it has been created and managed. Associating information with a particular process creates the possibility of relating information objects to each other and to the context in which the information has been created and used. The aim of this thesis is to identify and structure factors that can improve the traceability between information and processes over time. A set of factors, based on case studies and a set of analytical methods, is presented that can improve traceability over time. These factors have been identified and structured using the Synergy-4 model. They have been identified within four different spheres, namely competence, management, organization/procedure and technology. The factors have further been structured into three different time states: creation time, short and middle term, and long term. The research concludes that there are many factors influencing the ability to preserve information.
Preservation issues include the selection of metadata standards, organizational culture, lack of understanding from management, and the formalization of documents. The conclusion is that if an organization wants to succeed in preserving traceable information, it has to build strategies that cover the issues from a range of different angles. This thesis suggests that the crucial angles are competence, management, organization/procedure and technology. Furthermore, the strategies must be in place at the stage of creation of the information objects.</p> Fri, 10 Jul 2009 12:05:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-515 Johan Larsson The effect of leadership values, behaviors and methodologies on quality and health http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-9330 <p>The purpose of the research presented in this thesis is to contribute to increased knowledge of how leadership can be practised to promote both employee health and organizations' quality work. Three research questions are posed: 1. How do leaders' values relate to their behavior and to aspects of health and quality? 2. How does leaders' behavior relate to aspects of health and quality? 3. How do leaders' methodologies relate to aspects of health and quality? The research is based on two case studies. Case study I concerns three successful workplaces that have received the award Sveriges bästa arbetsplats (Sweden's best workplace). The focus of this case study is mainly on the methodologies used. Case study II covers eight workplaces in Jämtland, where values, behaviors and methodologies in particular have been studied. Regarding leadership, Theory X and Theory Y are used as the theoretical basis for studying the leaders' values. The three-dimensional leadership behavior theory (change, task, relation) is used when the leaders' behavior is measured and discussed. Both qualitative and quantitative methods are used in the research.
Leaders with X-oriented leadership values obtain lower results when the employees rate the organization's quality aspects and all three dimensions of leadership behavior. There are some indications that organizations with Y-oriented leaders have employees with better health. The leadership profile with the best results regarding quality aspects and health outcomes has high values on all three dimensions, with the highest value on relation, followed by change at a similar level, and with task orientation also high but lower than the other two dimensions. Common methodologies of successful organizations have been identified and presented. Patterns between the successful organizations in case studies I and II have also been identified.</p> Fri, 10 Jul 2009 11:56:00 +0200 http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-9330