2.1 Introduction: Background for Ferroalloys Development and Production

The history of ferroalloys is relatively short compared to the development of bronze or iron. Most ancient iron artifacts that have been investigated are fairly pure iron, containing carbon as the only alloying element. Carbon control by carburizing/decarburizing treatments was traditionally understood by blacksmiths and was used throughout the “Iron Age” to adjust steel properties. Steel produced via the direct reduction route in bloomeries remained naturally unalloyed because iron oxide was reduced at such low temperatures that iron formed in the solid state. Other components typical of modern steels, like manganese and silicon, were present only as natural impurities, mainly in slag inclusions in the steel. When taller shaft furnaces were developed from bloomeries, stronger air blasting through tuyeres was needed. The temperature in the combustion zone increased, and the iron formed dissolved more carbon and melted: thus, the blast furnace process was discovered.

This development happened in the late Medieval Age in Central Europe. The product was carbon-saturated cast iron that typically contained a small percentage of silicon and, depending on the ore composition, sometimes also some manganese. Pig iron from blast furnaces was used as foundry iron for castings or converted to steel by difficult and time-consuming refining processes. These processes were gradually improved, but steel from bloomeries kept its dominance until the 19th century.

In the early 19th century, two main methods were used to refine hot metal from blast furnaces into steel: puddling with an oxidizing flame in a reverberatory furnace, and the crucible process, in which iron oxide (ore, scale) was added to hot metal to react with carbon and yield low-carbon steel (Flemings and Ragone, 2009; Tylecote, 1984).

In principle, some alloying would have been possible, but several prerequisites had to be met before rational alloying could be carried out.

First, the breakthroughs in chemistry at the end of the 18th and in the early 19th centuries, with the discovery of elements (nickel, oxygen, manganese, chromium, molybdenum, and silicon between 1751 and 1824) and the understanding of chemical reactions such as combustion/oxidation and reduction, made it possible to recognize the essential phenomena of contemporary iron- and steelmaking processes and to start developing new ones (Engels and Nowak, 1983).

Second, there had to be some evidence of the beneficial influence of additions on steel properties, which requires an understanding of steel microstructure and of the mechanisms by which alloying elements act.

Third, it had to be possible to produce potential alloying materials at a reasonable price. During the last half of the 1800s, these prerequisites gradually began to be fulfilled.

In steelmaking, the decisive breakthrough was the invention of the converter process by Henry Bessemer in 1855. He realized that oxygen from air blown through the carbon-rich hot metal burned off the carbon dissolved in the iron melt, and he succeeded in developing a proper reactor and technology for the Bessemer process. Bessemer’s steel converter was, however, lined with acid silica refractory. Even though it could operate at temperatures up to 1600 °C, the lining life was short. Because of the acid environment, the low-basicity slag was unsuitable for phosphorus removal. This was a big problem at the time in Great Britain, where plenty of P-bearing iron ores were available. The problem was solved when S.G. Thomas and P.C. Gilchrist succeeded in developing a basic doloma lining, introduced in 1878–1879 (Barraclough, 1990).

Doloma is a calcia-magnesia mixture obtained by burning natural dolomite ((Ca,Mg)CO3) mineral. Basic Thomas converters gradually replaced acid Bessemer converters. The open-hearth process (a flame-heated reverberatory furnace), developed by Siemens and Martin in 1860–1870, also started with acid lining but adopted basic lining and the new steelmaking practice as well. Thus, basic lining was available for electric furnaces when they came into use for steelmaking at the beginning of the 1900s, and for the production of ferroalloys too.

In parallel with Bessemer’s process development, the Scottish metallurgist Robert Mushet discovered a way to add manganese-containing spiegeleisen to liquid steel to “kill” it. The term killing stemmed from the prevention of steel melt “boiling” (actually the carbon-oxygen reaction and the formation and removal of CO gas, which appears as boiling). Certain additions like spiegeleisen were observed to eliminate this “wild” state and to “kill” the steel. Manganese was thus first used for steel deoxidation. Its beneficial effect in avoiding hot shortness by binding excess sulfur was soon recognized (Tylecote, 1984).

Spiegeleisen, containing 8% to 15% Mn and ~5% C, was already being produced in blast furnaces in the 18th century. Mushet was also among those who developed the first “tool steels,” with 1% to 2% Mn and a high tungsten content, in the 1860s. Robert Hadfield invented the first high-manganese steel in the 1880s, introducing a work-hardening steel with 11% to 14% Mn and 1% C (Tylecote, 1984). This steel grade has remained essentially unchanged and still holds a firm position in impact- and wear-resistant applications (e.g., railway crossings).

By this time, it had also become evident that alloying or modifying steel with pure elements is not economical. Further, it can be technically challenging or impossible (e.g., dissolving metallic tungsten in a steel melt would take an enormously long time).

Metallurgists then started to consider adding alloying elements to steel in the form of a ferroalloy, an alloy of iron with at least one other element (excluding carbon, which is present as cementite, Fe3C, or graphite in cast irons). Small-scale production of ferroalloys began in the 1860s using the crucible process: chromium or manganese ore was reduced by coal in graphite crucibles, which were heated to high temperatures to obtain a liquid high-carbon ferroalloy (~25% Cr). High-carbon ferromanganese production, with 80% Mn and 6% to 7% C, was also started in a blast furnace by the French Terre Noire Co in 1877. It was also demonstrated that FeSi could be produced in a blast furnace, as could low-content FeTi and FeV ferroalloys. Production of FeCr in the same way, however, proved difficult because of the high melting point of the slag formed during smelting (Volkert and Frank, 1972). When electric furnace technology was introduced at the end of the 1800s, electric smelting of ferroalloys progressed gradually in the early 1900s, and today all ferroalloys that require furnace smelting are produced exclusively in electric furnaces.

The occurrence of silicon in iron has its historical origin in blast furnace ironmaking. Deliberately or by accident, relatively high Si contents (several percent) were obtained in pig iron, depending on the blasting practice, the burden materials, the charcoal, and other factors. Generally, high silicon in cast iron promotes graphite formation and thus improves ductility.

In the early 1900s, however, the exact explanation was not known, and producing good-quality castings was more art than science. In the early 1800s, the Swedish chemist Jacob Berzelius produced a kind of ferrosilicon by crucible reduction. He also succeeded in separating elemental silicon, which he called “silicium,” in 1824 (Engels and Nowak, 1983). Berzelius observed that silicon burned in air and formed silica (silicon oxide). Because the production of elemental silicon turned out to be very difficult, ferrosilicon was produced instead. By the late 19th century, it was possible to produce Si-containing hot metal with up to 20% Si in blast furnaces; the product was used for steel deoxidation and alloying. When the electric furnace was developed at the turn of the century, production technology for FeSi was developed as well.

Smelting in an electric furnace with reduction by carbon (coke, charcoal, coal) thus constitutes the method on which the production of most ferroalloys was founded, and special furnace constructions and technologies were developed for it. In ferroalloy production, furnaces are designed to operate in submerged arc mode (the submerged arc furnace [SAF]), in which the high resistivity of the charge is utilized for smelting.
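
To make the resistive-heating principle concrete, the following is a minimal sketch of how power dissipated in the charge scales with current and charge resistance. The electrical values are entirely hypothetical (the text gives no SAF operating parameters); the point is only the P = I²R relationship.

```python
# Illustrative Joule-heating calculation for a submerged arc furnace charge.
# The current and resistance values are hypothetical, chosen for illustration.

def joule_power_mw(current_ka: float, resistance_mohm: float) -> float:
    """Power dissipated in the resistive charge, P = I^2 * R, in MW."""
    current_a = current_ka * 1e3             # kA -> A
    resistance_ohm = resistance_mohm * 1e-3  # milliohm -> ohm
    return current_a ** 2 * resistance_ohm / 1e6  # W -> MW

# Example: 100 kA through a 1 milliohm charge path dissipates 10 MW of heat
# in the burden around the electrode tip.
print(f"{joule_power_mw(100, 1):.0f} MW")
```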

The processes described so far are based on carbo-thermic reduction at high temperatures. Sinter or pellets prepared from concentrates, and possibly lumpy ore, are reduced by coke to form the ferroalloy. The heat for the reactions is generated by electric arcs (with plasma temperatures of ~18,000 to 20,000 °C) formed between the tips of the electrodes in the furnace. The temperature of materials near the arc zone may reach ~2800 to 3000 °C, whereas temperatures in the main reaction zone are typically around 1600 to 1800 °C. Because the reduction reactions in the smelting of a ferroalloy are strongly endothermic, large amounts of electricity are consumed. In modern FeCr and FeMn processes, typical electricity consumption is 3000 to 3500 kWh/t of ferroalloy; in the FeSi process, it can reach 7000 to 8000 kWh/t for a high-silicon (75% Si) ferroalloy.
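
As a rough illustration of what these specific consumption figures imply for furnace sizing, here is a back-of-the-envelope sketch. The specific energies are taken from the figures quoted above; the production rate and furnace availability in the example are assumptions made purely for illustration.

```python
# Back-of-the-envelope furnace load estimate from specific energy consumption.
# Specific energies are the figures quoted in the text; the production rate
# and availability used in the example are illustrative assumptions.

SPECIFIC_ENERGY_KWH_PER_T = {
    "FeCr": 3250,    # midpoint of the quoted 3000-3500 kWh/t range
    "FeMn": 3250,    # midpoint of the quoted 3000-3500 kWh/t range
    "FeSi75": 7500,  # within the quoted 7000-8000 kWh/t range for 75% Si
}

def average_load_mw(annual_tonnes: float, kwh_per_t: float,
                    availability: float = 0.9) -> float:
    """Average electrical load (MW) needed to smelt annual_tonnes per year."""
    operating_hours = 8760 * availability  # hours per year on load
    return annual_tonnes * kwh_per_t / operating_hours / 1000.0

if __name__ == "__main__":
    for alloy, energy in SPECIFIC_ENERGY_KWH_PER_T.items():
        mw = average_load_mw(100_000, energy)  # hypothetical 100,000 t/a plant
        print(f"{alloy}: ~{mw:.0f} MW average load at 100,000 t/a")
```

With these assumptions, a 100,000 t/a FeSi75 operation corresponds to an average load on the order of 95 MW, roughly twice that of an FeCr or FeMn furnace of the same annual output, which reflects the much higher specific energy of silicon reduction.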