04/02/2023

This week, we continue our four(and a half)-part (I, II, III, IVa, IVb) look at pre-modern iron and steel production. Last week, we looked at how a blacksmith reshapes our iron from a spongy mass called a bloom first into a more workable shape and then finally into some useful object like a tool. But as we noted last week, the blacksmith doesn't just need to manage the shape of the iron, but also its hardness and ductility.
As we’ll see this week, those factors – hardness and ductility (and a bunch of other more complex characteristics of metals which we’re going to leave out for simplicity’s sake) – can be manipulated by changing the chemical composition of the metal itself by alloying the iron with another element, carbon. And because writing this post has run long and time has run short, next week, we’ll finish up by looking at how those same factors also respond to mechanical effects (work hardening) and heat treatment.
As always, if you like what you are reading here, please share it; if you really like it, you can support me on Patreon. And if you want updates whenever a new post appears, you can click below for email updates or follow me on twitter (@BretDevereaux) for updates as to new posts as well as my occasional ancient history, foreign policy or military history musings.
What Is Steel?
Let’s start with the absolute basics: what is steel? Fundamentally, steel is an alloy of iron and carbon. We can, for the most part, dispense with many modern varieties of steel that involve more complex alloys; things like stainless steel (which adds chromium to the mix) were unknown to pre-modern smiths and produced only by accident. Natural alloys of this sort (particularly with manganese) might have occurred where local ores had trace amounts of other metals. This may have led to the common belief among ancient and medieval writers that iron from certain areas was superior to others (steel from Noricum in the Roman period, for instance, had this reputation; note Buchwald, op. cit. for the evidence of this), though I have not seen this proved with chemical studies.
So we are going to limit ourselves here to just carbon and iron. Now in video-game logic, that means you take one ‘unit’ of carbon and one ‘unit’ of iron and bash them together in a fire to make steel. As we’ll see, the process is at least moderately more complicated than that. But more to the point: those proportions are totally wrong. Steel is a combination of iron and carbon, but not equal parts or anything close to it. Instead, the general division goes this way (there are several classification systems but they all have the same general grades):
- Below 0.05% carbon or so, we just refer to that as iron. There is going to be some small amount of carbon in most iron objects, picked up in the smelting or forging process.
- From 0.05% carbon to 0.25% carbon is mild or low carbon steel.
- From about 0.3% to about 0.6%, we might call medium carbon steel, although I see this classification only infrequently.
- From 0.6% to around 1.25% carbon is high-carbon steel, also known as spring steel. For most armor, weapons and tools, this is the ‘good stuff’ (but see below on pattern welding).
- From 1.25% to 2% are ‘ultra-high-carbon steels’ which, as far as I can tell, didn’t see much use in the ancient or medieval world.
- Above 2%, you have cast iron or pig iron; that much carbon makes the metal far too hard and brittle, unsuitable for most purposes.
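Since the grades are just bands on a single number, they are easy to express as a quick bit of code. Here is a minimal sketch in Python (my own illustration, nothing more; the cutoffs follow the rough divisions above, with the low carbon band simply run up to 0.3% to cover the gap in the usual classifications):

```python
def steel_grade(carbon_pct: float) -> str:
    """Rough grade of an iron-carbon alloy by carbon content (weight %).

    Cutoffs follow the approximate divisions above; real classification
    systems draw the lines slightly differently.
    """
    if carbon_pct < 0.05:
        return "iron"
    elif carbon_pct < 0.3:
        return "mild (low carbon) steel"
    elif carbon_pct < 0.6:
        return "medium carbon steel"
    elif carbon_pct <= 1.25:
        return "high carbon 'spring' steel"
    elif carbon_pct <= 2.0:
        return "ultra-high-carbon steel"
    else:
        return "cast iron / pig iron"

print(steel_grade(0.8))  # -> high carbon 'spring' steel: the 'good stuff'
print(steel_grade(2.5))  # -> cast iron / pig iron (like the Ming cat statuette below)
```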
This is a difficult topic to illustrate so, since the internet is for cat pictures, via the British Museum, here is a Ming Dynasty cast-iron statuette of a cat, 15th or 16th century. Cast iron production was discovered much earlier in China than in most of the rest of the world, but cast iron products were brittle and not generally suitable for demanding use.
I don’t want to get too bogged down in the exact chemistry of how the introduction of carbon changes the metallic matrix of the iron; you are welcome to read about it. As the carbon content of the iron increases, the iron’s basic characteristics – its ductility and hardness (among others) – change. Pure iron, when it takes a heavy impact, tends to deform (bend) to absorb that impact (it is ductile and soft). Increasing the carbon content makes the iron harder, causing it to both resist bending more and also to hold an edge better (hardness is the key characteristic for holding an edge through use). In the right amount, the steel is springy, bending to absorb impacts but rapidly returning to its original shape. But with too much carbon, the steel becomes too hard and not ductile enough, causing it to become brittle.
Compared to the other materials available for tools and weapons, high carbon ‘spring steel’ was essentially the super-material of the pre-modern world. High carbon steel is dramatically harder than iron, such that a good steel blade will bite – often surprisingly deeply – into an iron blade without much damage to itself. Moreover, good steel can take fairly high energy impacts and simply bend to absorb the energy before springing back into its original shape (rather than, as with iron, having plastic deformation, where it bends, but doesn’t bend back – which is still better than breaking, but not much). And for armor, you may recall from our previous look at arrow penetration, a steel plate’s ability to resist puncture is much higher than the same plate made of iron (bronze, by the by, performs about as well as iron, assuming both are work hardened). Of course, different applications still prefer different carbon contents; armor, for instance, tended to benefit from somewhat lower carbon content than a sword blade.
It is sometimes contended that the ancients did not know the difference between iron and steel. This is mostly a philological argument based on the infrequency of a technical distinction between the two in ancient languages. Latin authors will frequently use ferrum (iron) to mean both iron and steel; Greek will use σίδηρος (sideros, “iron”) much the same way. The problem here is that high literature in the ancient world – which is almost all of the literature we have – has a strong aversion to technical terms in general; it would do no good for an elite writer to display knowledge more becoming to a tradesman than a senator. That said, in a handful of spots, Latin authors use chalybs (from the Greek χάλυψ) to mean steel, as distinct from iron.
More to the point, while our elite authors – who are, at most, dilettantish observers of metallurgy, never active participants – may or may not know the difference, ancient artisans clearly did. As Tylecote (op. cit.) notes, we see surface carburization on tools as early as 1000 B.C. in the Levant and Egypt, although the extent of its use and intentionality is hard to gauge due to rust and damage. There is no such problem with Gallic metallurgy from at least the La Tène period (c. 450-50 B.C.) or Roman metallurgy from c. 200 B.C., because we see evidence of smiths quite deliberately varying carbon content over the different parts of sword-blades (more carbon in the edges, less in the core) through pattern welding, which itself can leave a tell-tale ‘streaky’ appearance to the blade (these streaks can be faked, but there’s little point in faking them if they are not already understood to signify a better weapon). There can be little doubt that the smith who welds a steel edge to an iron core to make a sword blade understands that there is something different about that edge (especially since he cannot, as we can, precisely test the hardness of the two every time – he must know a method that generally produces harder metal and be working from that assumption; high carbon steel, properly produced, can be much harder than iron, as we’ll see).
Via the British Museum, the so-called ‘Sword of Tiberius,’ a Mainz-type Roman gladius from the early imperial period (c. 15 AD). The sword itself has a mild steel core with high carbon steel edges and a thin coating of high-carbon steel along the flat. Almost certainly the higher carbon edge was welded on to the mild steel core during manufacture, an example of a blacksmith quite intentionally using different grades of steel.
That said, our ancient, or even medieval, smiths do not, of course, understand the chemistry of all of this. Understanding the effects of carburization and how to harness them to make better tools must have been something learned through experience and experimentation, not from theoretical knowledge – a thing passed from master to apprentice, with only slight modification in each generation (though it is equally clear that techniques could move quite quickly over cultural boundaries, since smiths with an inferior technique need only imitate a superior one).
Making Steel
Now, in modern steel-making, the main problem is an excess of carbon. Steel, when smelted in a blast furnace, tends to have far too much carbon. Consequently, a lot of modern iron-working is about walking the carbon content of the steel down to a usefully low level. But ancient iron-working approaches the steeling problem from exactly the opposite direction, likely beginning with something close to a pure mass of iron and having to find ways to get more carbon into that iron to produce steel.
So how do we take our carbon and get it into our iron? Well, the good news is that the basic principle is actually very simple: when hot, iron will absorb carbon from the environment around it, although the process is quite slow if the iron is not molten (which it never is in these processes). There are a few stages where that can happen and thus a few different ways of making steel out of our iron.
The popular assumption – in part because it was the working scholarly assumption for quite some time – is that iron can be at least partially carburized by repeatedly being reforged. Experimental efforts to replicate this suggest that this is not true (note Craddock, op. cit., 252 on the arguments). The first problem is time: carbon absorption for hot-but-solid iron (like an iron bar in the forge) is relatively slow, often taking hours (one experiment suggests about three hours to completely steel a 3mm thick piece of iron, with thickness increasing the time required non-linearly). But iron being worked is generally left in the forge fire only for minutes, which means that even if any carburization did take place, it would have penetrated only an extremely thin layer of the iron. Meanwhile, simply leaving the iron in the forge for a prolonged time is also a bad idea, as it will cause the iron to burn unless the forge is kept at a lower temperature (which would in turn mean not using it for regular forge work in the meantime) or all oxygen is excluded (more on that in a second). So at best, the forge fire is going to provide only an extremely thin coating of steel over a bar of iron – something like 0.03mm.
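To put rough numbers on that non-linearity: carburization is a diffusion process, so as a first approximation (an assumption on my part, not a measured law) the depth the carbon reaches grows only with the square root of time in the fire. Calibrated against the experiment just mentioned (about three hours to fully steel a 3mm piece, so roughly 1.5mm of penetration per face under ideal conditions), a sketch looks like this:

```python
import math

def carburized_depth_mm(hours: float) -> float:
    """Very rough carburization depth from one face of hot, solid iron.

    Assumes a diffusion-limited process (depth ~ sqrt(time)), calibrated
    to the experiment cited above: ~3 hours to fully steel a 3mm piece,
    i.e. ~1.5mm per face. Illustrative only; real rates depend heavily on
    temperature and atmosphere, and an open, oxygen-rich forge fire does
    far worse than this (hence the ~0.03mm figure in the text).
    """
    return 1.5 * math.sqrt(hours / 3.0)

for h in (0.1, 3, 12, 48):
    print(f"{h:>4} h -> ~{carburized_depth_mm(h):.2f} mm per face")
# Doubling the depth costs four times the time: minutes in the fire buy
# almost nothing, while deep carburization takes the better part of a day.
```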
The problem with trying to make up the difference by just going through the forging process over and over again is that you also have two different sources of decarburization. The first is the air. As we saw in our discussion of the roasting process, if you heat up iron – either metal or ore – in an environment with lots of oxygen (O2), that oxygen will tend to grab spare carbon to make carbon dioxide (CO2). That’s still true of our carburized iron when it is heated up for forging, and since our smithy has to have an oxygen-rich atmosphere, on account of our smith’s need to breathe, some of that carbon will get pulled out of the outermost layer of the iron. Worse yet, that oxygen is also going to oxidize (that is, rust) that outer layer, and – as we discussed last time – that rust will get dislodged during hammering as hammer scale. As a result, careless forging can actually decarburize the edges of a piece of iron, and metallurgical tests on some ancient weapons have seen evidence that this did happen, with carbon content in the edge lower than in the core (which is, to be clear, not a desirable situation)!
Fundamentally, our problem here is oxygen. Oxygen makes the iron burn in the forge, it causes oxidation in the iron and it steals away our free carbon to form carbon dioxide. So in order to get our carbon into our iron in quantity, we need to look for ways to get the iron hot, in a carbon-rich environment, with little to no oxygen present. That leaves two ideal phases for steeling:
First, steeling in the bloom. After all, we already have a stage of iron production where creating an oxygen-starved environment was crucial. Can we get our carbon into our bloom during the smelting process? The answer is yes; if the ratio of charcoal to iron ore is tilted heavily enough in charcoal’s favor, the end result, once the charcoal has burned down, will be a steel bloom. This seems to have been the case in some traditional African bloomery traditions (Craddock, op. cit. 236) and the Japanese Tatara-buki process (Sim & Kaminski, op. cit. 59). Some Iron Age European finds have also been interpreted this way, but my understanding is that there are still many questions here; the documentary evidence provides no clear support for widespread use of this bloom-steeling method in Europe.
Alternately, the carbon can be introduced after the iron has been formed into a bar in a process known as cementation (also called case hardening or forge hardening, although the phrase ‘case hardening’ can also mean effectively ‘surface hardening,’ making it an imprecise term). Once iron is heated above roughly 900°C (or, in visual terms, a ‘red heat’), it will begin to absorb carbon if kept in contact with a source of carbon in an oxygen-starved environment. And we actually have a fair amount of attestation as to how this would be done from the medieval period (see Craddock, op. cit. 252).
First, the iron bars (having been smelted into a bloom, then forged into bars) were wrapped or surrounded in carbon-rich materials, which might be charcoal itself, or else plants, hooves, horn or leather, and then sealed inside of a ceramic casing. That casing was then heated to the correct temperature (because the interior of the case is oxygen-deprived, there is minimal risk of ‘burning’ the iron, so going ‘high’ on the temperature is less of a threat) and held there for several hours while the iron absorbed the carbon. The iron bars used were often intentionally quite thin (1-2cm thickness) to allow for more rapid carburization. The result, sometimes called blister steel, might have carbon contents up to 2%, depending on how thorough the cementation process was; doubtless long practice led smiths to get a sense for exactly how long and at what heat a given amount of iron should be treated to produce the desired levels of carbon.
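Using the same assumed square-root penetration as in the sketch above, we can see both why thin bars mattered and why the resulting carbon was uneven; again, this is a toy model of mine, not data from any actual cementation run:

```python
import math

def steeled_fraction(thickness_mm: float, hours: float) -> float:
    """Fraction of a bar's cross-section reached by carbon, entering from
    both faces, under the same rough sqrt-of-time penetration (~1.5mm per
    face in 3 hours) assumed in the sketch above. Illustrative only.
    """
    depth_per_face = 1.5 * math.sqrt(hours / 3.0)
    return min(2.0 * depth_per_face / thickness_mm, 1.0)

# A several-hour soak on bars of varying thickness:
for t in (10, 15, 30):
    print(f"{t} mm bar after 6 h -> ~{steeled_fraction(t, 6):.0%} carburized")
# Even the thin (1-2cm) bars the smiths preferred are only partly steeled
# after a few hours, with the carbon concentrated near the surface - the
# uneven distribution discussed under pattern welding below.
```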
What is clear is that in both cases, whether steeling in the bloom or using cementation, the fuel and time required made the resulting steel expensive; Tylecote (op. cit. 278) notes that steel in the medieval period often commanded around four times the price of iron. Consequently, we tend to see steel and iron objects in use, side by side, from the beginning of the European Iron Age onward (Craddock, in particular, has examples). Just as iron was generally only used over cheaper materials like wood, stone and leather when the job demanded a lot of material toughness at low weight, so steel (especially steel of higher quality) was generally only used in place of iron when the job demanded extreme performance. But of course, not all parts of even a single object demand exactly the same properties, which brings us to:
Pattern Welding!
As noted above, it was most efficient to carburize fairly thin rods of iron, since the carbon was absorbed through the outermost layer of the iron. Moreover, the process of making steel through carbon absorption, either in the bloom or through cementation, often leaves the carbon levels throughout the iron somewhat uneven, with more carbon in the outer layers and less in the core.
One way to manage this, particularly in the production of practical tools, was ‘steeling.’ We actually saw last week an axe-head produced through a method designed to permit steeling. In a steeled blade or tool, the core of the tool is forged in iron (perhaps lightly carburized) and then, near the end of forging, the business end (blade, hammer-surface, pick-point, etc. – whatever needs the most hardness, generally) is forge-welded with a piece of steel, making a single piece of metal bonded strongly together but with different carbon contents in different areas. This can be done a number of ways; the steel might be used as a core and the iron body welded around it and then filed away leaving the steel exposed (more common, I believe, with axes – we saw this method last week). In other cases, a steel edge might be wrapped or layered over an iron core.
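A bit of arithmetic makes the logic of steeling concrete: the overall carbon of a welded composite is just the mass-weighted average of its parts, but what matters is where the carbon sits. A toy illustration (the masses and carbon contents here are my own invented numbers, not measurements from any real tool):

```python
def composite_carbon(parts: list[tuple[float, float]]) -> float:
    """Mass-weighted average carbon content of a forge-welded composite.

    parts: (mass_grams, carbon_pct) per component. A toy model: it ignores
    the carbon lost to oxidation during welding, which was a real cost.
    """
    total_mass = sum(mass for mass, _ in parts)
    return sum(mass * carbon for mass, carbon in parts) / total_mass

# A hypothetical steeled axe: a soft iron body with a welded steel edge.
body = (900.0, 0.1)   # grams, % carbon
edge = (150.0, 0.8)
print(f"overall: {composite_carbon([body, edge]):.2f}% C")
# -> overall: 0.20% C - 'mild steel' on average, but the hardness lives
#    in the 0.8% edge, exactly where the tool needs it.
```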
Via Wikipedia, a close up of a Moro Barung (a type of sword from the Philippines) showing the ‘streaky’ pattern produced when a pattern welded blade is etched and polished to bring out the welds between the different pieces of iron.
If the goal instead was to create a more homogeneous steel, the solution was ‘piling‘ (sometimes inaccurately referred to as ‘damascening’). The steel bar is drawn out into a fairly thin rod, then folded back and fire-welded into itself, often repeatedly, to create a more homogeneous steel. Though it is now mostly a thing of the past, for quite some time there was a pervasive popular belief that this particular method was unique to Japan; in any event, it was not. The downside, of course, was the time and labor demanded, compounded by the fact that repeated fire-welding meant repeated material loss to oxidation and ejection, especially since, after several pilings, the amount of slag left to be ejected was likely to be quite low (meaning that what was lost was mostly good metal).
More complex is pattern welding, which marked some of the highest quality blades in much of the world until the early modern period (with exceptions for things like Wootz steel, which is not pattern welded, but confusingly sometimes equated with pattern welded steel under the term ‘Damascus Steel,’ which you will note I am at pains to avoid entirely). In the basic pattern welding method, we begin with a thin rod or bar of carburized iron. This is then piled and drawn repeatedly to create a laminated rod of iron with relatively more homogeneous carbon content. Then two or more such rods are twisted and then welded together to produce a strong steel core. Generally then a blade – often more fully carburized to maximize its hardness (since harder metal holds a sharp edge better) – is welded on to the core to make the final object.
Via the British Museum, an X-ray of the Sutton Hoo sword, a pattern welded sixth century early English sword. The X-ray brings out the wave-patterns of the pattern welding from beneath the rust and damage which would otherwise obscure them.
Pattern welding was intensive in both time and fuel and consequently was generally reserved for valuable prestige items. For iron, this almost always meant the blades of weapons, particularly (though not exclusively) swords; pattern welded knives, hammers and spear-heads exist, but are less common. Part of the prestige value must have been the high performance of weapons made this way, but it cannot have hurt that such weapons, if polished and etched, clearly displayed the patterns of the welds, and there is evidence that they were kept in this state. Pattern welding is an ancient technique – some Middle and Late La Tène blades are exquisitely pattern welded – which in Europe continues through the Roman period and into the Middle Ages, although it is somewhat less common (as I understand it) in the Early Middle Ages as compared to either the Roman period or the High Middle Ages. The art never seems to have been ‘lost,’ though the greater availability of either imported Wootz or larger and more homogeneously carburized locally made steel blooms (using the bloomery process rather than cementation) seems to have caused European sword manufacture to shift away from pattern welding later in the Middle Ages, essentially because it was no longer necessary to ensure a blade of sufficient quality.
Of course, pattern welding could be ‘faked’ by going through the final steps (twisting, welding and attaching the blade) without the former steps or without properly carburizing the iron. Sometimes these blades – pattern welded using low-carbon or even no-carbon iron – are taken to mean that the role of the carbon or the quality of the metal was not understood. I do not think this is the case, given that the carbon content of high quality blades, even as early as the Roman period, often seems very deliberately distributed. Bishop and Coulston (Roman Military Equipment (2006), 242) feature a chart (not in the public domain, so I won’t reproduce it here) which shows the carbon content of a number of Roman gladii in cross-section; several have high carbon (hard) edges and lower carbon (soft) cores, which is exactly what you would want in a sword (and coincidentally also how the highest quality Japanese katana were made, though I should note that these gladii are some 1,300 years older than the oldest katana).
Intermission
This was intended to be one long post, but the demands of time have led me to split it here. Next time, we’ll look at the other tools that a blacksmith has to control the characteristics of his iron: work hardening and heat treatment (which is to say hardening, tempering and quenching).