Webinar Q&A: How to monitor laser system performance with power measurement (2nd edition)
Thursday, June 03, 2021
Thank you to everyone who participated in the live webinar we held on May 6th, 2021 with the Optical Society: “How to monitor laser system performance with power measurement”, presented by Félicien Legrand, US/Canada Sales Manager at Gentec-EO.
It was great to engage with existing and new customers, and we look forward to following up on each of your needs.
The recorded webinar is available to watch again here: https://www.gentec-eo.com/webinar/laser-power-measurement
The participants in the live event were very active and asked many questions. We tried to answer most of them during the event, but we also wanted to take the time to share the answers to all your questions. To help you sort through them, we have grouped similar topics together.
Didn’t find the answer you were looking for? Maybe you’ll find it in the first edition.
Yes, this is our general guideline at the moment. But ultimately, as you implied, it depends on how you use the detector. For example, if you don't use the detector 24/7, as we sometimes see in industrial settings, then you won't need to send it back for recalibration as often. Another sign that it's time to recalibrate is when you see abnormal measurements and know for sure that they cannot be attributed to the laser source.
Instead, the cause could be that the detector has "drifted", as we call it, and needs recalibration to bring it back in line. Note, however, that we have customers who send back their detectors only once every 2 to 3 years, and upon recalibration we do not see a significant calibration drift, implying that the calibration was still valid. Other customers prefer to send theirs back every 6 months.
The answer is that you don't actually choose or decide the calibration factor. The calibration is done entirely by Gentec-EO, and the only thing you need to do when taking measurements is select the right wavelength on the meter. After that, the system takes care of everything for you: it reads the calibration factor of the detector by itself and shows you a calibrated measurement.
The answer to this question is quite extensive but it's mostly summarized in this technical note on our website.
We invite you to check it out but if you want a quick rundown, the general idea behind calibration is that we calibrate each detector that we sell against a reference in our lab. This reference is calibrated by one of the internationally-recognized measurement standards laboratories, such as NIST. With this, we are able to define the sensitivity of each new detector to make sure that its measurements will match the reference, within uncertainties.
There is then another important part of calibration that involves checking the spectral response of the detector in question to then provide a spectral correction. This basically attributes a factor to each wavelength to make sure the measurement is correctly compensated when working at different wavelengths.
This is all done during the manufacturing of the detectors, so you don't need to do anything special to access the calibrated measurements. You just plug the detector into a Gentec-EO meter, which checks all the calibration parameters by itself and takes care of everything.
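As a rough illustration of what this meter-side logic amounts to, here is a minimal sketch. The sensitivity value and correction factors below are made up for the example; they are not Gentec-EO's actual calibration data, and the function name is hypothetical:

```python
# Hypothetical sketch of how a meter applies a spectral correction.
# All sensitivity and correction values are illustrative only.

BASE_SENSITIVITY_UV_PER_W = 100.0  # detector output (microvolts) per watt

# Example spectral correction table: wavelength (nm) -> multiplicative factor
SPECTRAL_CORRECTION = {
    532: 1.02,
    1064: 1.00,   # calibration wavelength: factor of 1 by definition
    10600: 0.95,
}

def calibrated_power(signal_uv: float, wavelength_nm: int) -> float:
    """Convert a raw detector signal (microvolts) to watts,
    compensating for the detector's spectral response at the
    wavelength selected on the meter."""
    correction = SPECTRAL_CORRECTION[wavelength_nm]
    return signal_uv / (BASE_SENSITIVITY_UV_PER_W * correction)

print(calibrated_power(500.0, 1064))  # 5.0 W at the calibration wavelength
```

This is why selecting the right wavelength on the meter matters: the same raw signal maps to different calibrated powers at different wavelengths.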
Ideally, your beam should cover 40 to 60% of the detector area for safe, repeatable, calibrated measurements, but our detectors will work for beams that cover 10 to 80% of the detector area. A beam that is too small is more likely to exceed the maximum power density supported by the detector, and a beam that is too big will induce clipping (which offsets measurements).
You need to know the size of the beam and the de-magnifying factor. If you make sure the setup is aligned and that the beam covers at most 80% of the detector, clipping should be negligible. A laser beam profiler or IR visualizer can help determine the beam diameter along the propagation axis.
The damage threshold is the main factor, but there are other advantages to filling the aperture. Bigger beams help the detector provide repeatable measurements, as the heat is distributed more evenly across the absorber surface.
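To make these guidelines concrete, here is a small sketch (all numbers illustrative, assuming a circular, roughly flat-top beam) that checks what fraction of the aperture a beam covers and the resulting average power density:

```python
import math

def coverage_fraction(beam_diameter_mm: float, aperture_diameter_mm: float) -> float:
    """Fraction of the detector aperture area covered by the beam."""
    return (beam_diameter_mm / aperture_diameter_mm) ** 2

def avg_power_density_w_per_cm2(power_w: float, beam_diameter_mm: float) -> float:
    """Average power density of a flat-top beam, in W/cm2."""
    radius_cm = beam_diameter_mm / 2 / 10  # mm -> cm
    return power_w / (math.pi * radius_cm ** 2)

# Example: a 12 mm beam on a 19 mm aperture carrying 50 W
frac = coverage_fraction(12, 19)               # ~0.40, at the low end of the ideal 40-60% range
density = avg_power_density_w_per_cm2(50, 12)  # ~44 W/cm2
```

The same power in a 3 mm beam would give a density roughly 16 times higher, which is why small beams are the ones that risk exceeding the detector's maximum power density.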
This actually depends on beam size, and also on which particular model is used. Our UP19K detectors (i.e. all detectors with a 19 mm aperture diameter that you may have seen on our website) recently received a new thermal disc design that really helps with spatial linearity.
With this new design, if the beam area is at least 10% of the aperture area of the detector, the effects should be minimal (about +/- 1% or less). If the beam area is less than 10% of the aperture area, the effects can be as high as +/- 3%. With the older models and the ones that do not yet have the new design (but soon will, like the UP55 models), the effects will be more noticeable.
If the beam is much larger than 10% in area, like 50% instead, then the effects on both newer and older models should be minimal. Basically, the larger the beam area relative to the aperture area, the smaller these effects. The speed at which the beam moves will not have as much of an impact as the size of the beam relative to the aperture area: that's the most important thing to look out for.
The answer is that yes, this sort of observation is definitely possible. That is why it's important to have a beam whose area is at least 10% of the aperture area: we ultimately calibrate with such a beam size, and we have characterized the surface so that it does not show such drastic measurement differences when the beam covers at least 10% of the total aperture area.
Note that it's indeed "area", not just "diameter": the beam area should be at least 10% of the detector aperture area. For example, this corresponds to about a 6 mm beam diameter for a 19-mm diameter aperture.
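The area-versus-diameter distinction matters because area scales with the square of the diameter. A quick check of the 6 mm figure quoted above, assuming circular beam and aperture:

```python
import math

def min_beam_diameter_mm(aperture_diameter_mm: float,
                         min_area_fraction: float = 0.10) -> float:
    """Smallest beam diameter whose area is min_area_fraction
    of the aperture area (area ratio = diameter ratio squared)."""
    return aperture_diameter_mm * math.sqrt(min_area_fraction)

print(min_beam_diameter_mm(19))  # ~6.0 mm for a 19 mm aperture
```

Note that a beam with 10% of the aperture *diameter* (1.9 mm here) would cover only 1% of the area, well below the guideline.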
The answer is that we usually have water-cooling modules or fan-cooling modules on our high-power solutions, but we also have models that offer typical convection cooling and that can still handle up to 10 kW.
Our high-power, fan/water-cooled solutions include UP55G-600F-HD-D0 and HP125A-15KW-HD-D0. Fan-cooled solutions can reach powers up to 600 W, but with water-cooling, we can go all the way up to 100 kW.
Our convection-cooled solutions include PRONTO-10K. This solution works differently than the ones previously mentioned, as it can only handle this very high amount of power in a short burst. PRONTOs are specifically calibrated to measure after 5 seconds, with the same accuracy as the fan-cooled or water-cooled solutions.
So it really is a good way to measure high-power beams at a more affordable price, without having to plug in a fan or connect water lines. PRONTO comes in four versions: up to 500 W, 3000 W, 6000 W and 10,000 W.
The maximum peak power density that our detectors can handle varies from one model to another, but to give you an idea, our most commonly used power detectors have a broadband absorber that can easily handle power densities in the MW/cm2 range, and some of our more specialized solutions can handle levels in the GW/cm2 range.
We usually prefer to look at damage thresholds in terms of energy density, J/cm2, which varies with pulse width. For example, our UP-QED series can handle up to 8 J/cm2 at 1064 nm, 7 ns, but for 1 fs, we estimate it to be around 0.1 J/cm2. It would be best to run your full specifications by us to find out whether our solutions can handle them.
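As a rough first check before sending us your specifications, the fluence of a pulsed beam can be estimated from the pulse energy and beam size. The sketch below assumes a flat-top beam (for a Gaussian beam, the peak fluence is roughly twice the average computed here); the numbers are illustrative:

```python
import math

def avg_fluence_j_per_cm2(pulse_energy_j: float, beam_diameter_mm: float) -> float:
    """Average fluence of a flat-top pulse, in J/cm2.
    For a Gaussian beam, the peak fluence is roughly twice this value."""
    radius_cm = beam_diameter_mm / 2 / 10  # mm -> cm
    return pulse_energy_j / (math.pi * radius_cm ** 2)

# Example: 10 mJ pulses in a 5 mm beam
fluence = avg_fluence_j_per_cm2(0.010, 5)  # ~0.051 J/cm2
# Compare this against the detector's damage threshold at your pulse
# width, e.g. 8 J/cm2 for UP-QED at 1064 nm, 7 ns (from the answer above).
```

If the estimated fluence approaches the quoted threshold at your pulse width, enlarging the beam on the detector is the simplest way to bring it down.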
The first thing to note is that we cannot say for sure what the effects of temperature and humidity are on the laser itself: it would be best to ask the laser manufacturer. For the detectors, we usually provide the following guideline: an operating temperature of 15 to 28 °C with relative humidity not exceeding 80%, and storage at 10 to 65 °C with relative humidity not exceeding 90%.
If the room temperature changes, it could cause a slight offset in the measurement. If possible, you should perform a new zeroing when the room temperature changes: this will correct the offset back to zero at the new room temperature.
If the temperature change does not occur too quickly, it will not affect the measurements. "Too quickly" here means within a few seconds (for example, 5 seconds). If the room temperature changes from 22 °C to 26 °C in, say, 3 minutes, that should not show up in your measurements. Again, this is only valid as long as the before and after temperatures are still within the operating temperature range, which is 15 °C to 28 °C.
The answer is that a pyroelectric detector would likely not measure the energy correctly. A pyro crystal requires a change in temperature to work correctly and make a measurement, which is why pyroelectric detectors do not work with CW lasers. As such, we would expect the detector to be unable to correctly integrate each pulse because of the constant CW stream of power affecting its temperature.
We have seen this type of laser before and it's always a challenge: we often have to go instead with a power detector, which would correctly measure the total power of both the CW background and stream of pulses.
That's a very good question and a subject we don't mention often, but yes, our power detectors can in fact be used in a vacuum setting. However, they need to be modified and customized a little to be suitable for these conditions. Among other things, we usually replace some components that would outgas in a vacuum.
We would just need to know more about your application to find exactly what would work best, then we apply the "vacuum treatment" to the detector, as we call it, and we can quote it!