
Wolverine-Hawkeye Telecine


  • Originally posted by Bruce Davis View Post
    I'm not a programmer, so I have not found out why the script gives me an error (I searched but no luck).
    Line 6 script error: there is no function named "Convert".
    Where in the script would I need to place this extra line? Thanks:
    Return sprocketAlign(film,8,100).ConvertToRGB24().imagewr iter(file="D:\movie\image%d.tiff",start=0,end=0,ty pe="tiff")
    I see some space errors.

    The "no function named "Convert"" error is because of a space error like this: Convert ToYV12(). It must be like this: ConvertToYV12()

    Code:
    ImageSource("D:\Super8MOVIE\image%d.tiff",start=1,end=100,fps=18).ConvertToYV12()
    Code:
    imagewriter(file="D:\movie\image%d.tiff",start=0,end=0,type="tiff")

    I noticed that when I copied and pasted the script to this forum, I got a lot of space errors; I don't know why.

    Change Return sprocketAlign(film,8,100) to this:


    Code:
    Return sprocketAlign(film,8,100).ConvertToRGB24().imagewriter(file="D:\movie\image%d.tiff",start=0,end=0,type="tiff")
    You can change type="tiff" to type="jpg" if you want a JPG image sequence, and change
    image%d.tiff to image%d.jpg
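
    For example, the JPG variant of the line would look like this (a minimal sketch; the path is a placeholder from the script above):

    Code:
    Return sprocketAlign(film,8,100).ConvertToRGB24().imagewriter(file="D:\movie\image%d.jpg",start=0,end=0,type="jpg")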

    More info about the AviSynth ImageWriter here: http://avisynth.nl/index.php/ImageWriter
    Last edited by Mattias Norberg; July 08, 2020, 10:34 AM.



    • One potential issue with the sprocket holes is auto exposure: it will make the image darker. It may be possible to compensate by bumping up the auto reference slider. Alternatively, leave just a little bit of the hole showing, enough for auto alignment but not enough to impact auto exposure.
      Bruce, I tried IC Measure and this sensor does not support multiple exposures.
      IC Measure HDR is the same as in IC Capture. It does make a big difference.
      While playing with it I discovered the following:
      Highlight reduction can help with very bright areas.
      The Auto Max Value limit should be turned off, or cranked up to the max;
      it limits the exposure to be able to maintain the FPS, but that is not what we want.



      • With IC Capture and my camera I do not use auto exposure at all; I have disabled all the auto features, and gamma is at the default setting of 100. I only adjust Red, Green, and Blue to get the white balance correct, and I adjust that against the film gate without film in it, i.e. against the LED lamp. I set the exposure to the maximum without clipping, as shown by the IC Capture histogram when there is no film in the film gate, and I use this setting for the low-exposure captures too.

        For the high-exposure capture I only increase the exposure time.

        I was not sure whether to post this, maybe it is pointless information; this is the setting I use when I do two-exposure HDR capture.
        Last edited by Mattias Norberg; July 08, 2020, 08:03 PM.



        • It is a good point Mattias because even with newer cameras some people prefer to use manual exposure. In that case this is not an issue for them. For me it is an issue since I run in auto exposure most of the time. But leaving a narrow strip of the sprocket hole visible should do the trick.



          • Originally posted by Stan Jelavic View Post
            It is a good point Mattias because even with newer cameras some people prefer to use manual exposure. In that case this is not an issue for them. For me it is an issue since I run in auto exposure most of the time. But leaving a narrow strip of the sprocket hole visible should do the trick.
            ok



            • Hi Mattias, when using HDR you capture the S8 film twice, so you need to align all the images perfectly, hence the sprocket-hole alignment script.
              I have had to download plugins so the AviSynth script does not error out (but I do not have it fully working yet). The frames in VirtualDub 2.6 are upside down; is that normal? I am not sure if I downloaded the correct versions of the plugins.

              Hi Stan, will it be possible to take two snapshots of the same image with different exposures (automated) rather than running the film through twice? Probably not, but I thought I would ask. If not, the next step would be the 5 MP DFM 37UX264-ML Pregius sensor camera, but we do not know how good the HDR results would be.
              To get this result https://photos.google.com/share/AF1Q...9JM0J0TUFhc3NB how much difference in exposure (+/- from auto exposure) did you use, if you can remember?

              Regards - Bruce



              • Wow, a lot of information to digest here!

                No one needs an Adobe monitor to see or use the extra color depth captured above 8-bit, that being more than sRGB, 24-bit color. It can be seen when lifting the values in the shadows or dropping them in the highlights. The 12 bits from the UX226 might show more detail, if there were more detail on the film.

                I feel a little daunted when faced with using a collection of executables with line commands and batch files. There is a lot to learn to be efficient and make comparisons. There is no doubt that Mattias' example looked good. I've also seen some examples of AVISYNTH removing dust, scratches, and jitter. Not sure if it is better or equal with say Neat Video. Or how much better the FAST Debayer is than IC. The Y800 (RG/Edge) does not look as good to me as the RGBxx debayer. Y16 has no option to debayer in IC. I've not succeeded with Fast Debayer yet.

                The HDR feature in IC does a great job of balancing tonal ranges. My guess would be that the best option is still to make at least 2 exposures for highs and lows to be blended....except for all the extra steps!

                Stan, you must be developing new options for the capstan. I don't think I have a "takeup" switch unless that is either REVERSE or REWIND. I wonder if the lady in the white dress knows of her celebrity? Both sides are looking better than previous comparisons! I'm not sure what you're doing, is it typical HDR from multiple exposures?

                Mattias, do you use a plugin to merge the 2 captures (high and low) for HDR, or just AVISYNTH alone?

                Bruce, I have similar questions about HDR with AVISYNTH. I know that DJI just released a new Drone with a Sony HDR sensor that takes 2 simultaneous exposures using adjacent pixel blocks. It has to be in real time for video. There is a hit in resolution. I'll make a note to get that sensor model.



                • Originally posted by David Brown View Post

                  I feel a little daunted when faced with using a collection of executables with line commands and batch files. There is a lot to learn to be efficient and make comparisons. There is no doubt that Mattias' example looked good. I've also seen some examples of AVISYNTH removing dust, scratches, and jitter. Not sure if it is better or equal with say Neat Video. Or how much better the FAST Debayer is than IC. The Y800 (RG/Edge) does not look as good to me as the RGBxx debayer. Y16 has no option to debayer in IC. I've not succeeded with Fast Debayer yet.


                  Mattias, do you use a plugin to merge the 2 captures (high and low) for HDR, or just AVISYNTH alone?

                  Hi, Fast Debayer can debayer Y16 to RGB48. I tested it, but I did not see any difference; only the file got bigger. So I capture in Y800 and debayer to RGB24 with Fast Debayer.

                  I have an AviSynth script that aligns the low-exposure image sequence with the high-exposure image sequence, then does the HDR merge, and then the sprocket-hole alignment. I can post it here, but I will first try to clean the script up a little.

                  But now I do not need to align the low and high image sequences: I have fixed my DIY film scanner to capture the same frame twice, low and high exposure, before the next frame comes. I do that with an Arduino.
                  In IC Capture I keep the long exposure setting, and with the Arduino I control the LED light's on and off time instead, so I get the low and the high exposure.

                  Here is the DIY film scanner loop:

                  The Arduino drives the stepper motor. When the Hall sensor is triggered, it sends 5 V to the Arduino interrupt pin, and the Arduino immediately stops and disables the stepper motor so it does not run. The Arduino then turns on the LED light and sends 5 V to my camera to trigger it to take a photo; in the Arduino I use a delay to get the right exposure, and then the Arduino turns off the LED light. There is one more delay before the Arduino turns the LED light on again for the second-exposure photo. When that is done, the Arduino enables the stepper motor so it starts running again.

                  And that is the loop.

                  With this method I can take an unlimited number of photos with different exposures before the next frame comes.

                  My scanning speed is 2 fps x 2 photos, so it is like 4 fps.

                  I use this version of AviSynth: https://forum.doom9.org/showthread.php?t=148782

                  I do have Neat Video 5 for VirtualDub, but AviSynth denoising is better.


                  This is how I connected my 6 W LED to the Arduino: https://www.youtube.com/watch?v=0mYwr933rz8

                  This is how I separate the low and high image sequences: https://www.youtube.com/watch?v=9ASgtBwPynk
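
                  Since the scanner writes low and high frames alternately into one sequence, the split can be sketched like this (the path and frame count are placeholders; it matches the SelectOdd/SelectEven step in the full script):

                  Code:
                  # frames alternate Low, High, Low, High, ...
                  c = ImageSource("D:\scan\image%d.tiff",start=1,end=100,fps=36).ConvertToYV12()
                  LowEx  = c.SelectOdd()
                  HighEx = c.SelectEven()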
                  Last edited by Mattias Norberg; July 09, 2020, 02:40 AM.



                  • OK, here is my full align AviSynth script


                    Code:
                    SetMemoryMax(1920)
                    SetMTMode(5,2)
                    
                    
                    
                    
                    a=ImageSource("F:\Super8mm_Sound\LOW_exposure\image%d.ppm",start=1,end=3662,fps=18).ConvertToYV12()
                    b=ImageSource("F:\Super8mm_Sound\HIGH_exposure\image%d.ppm",start=1,end=3662,fps=18).ConvertToYV12()
                    
                    #a=ImageSource("F:\Super8mm_Sound\LOW_exposure\image%d.tiff",start=1,end=3662,fps=18).ConvertToYV12()
                    #b=ImageSource("F:\Super8mm_Sound\High_exposure\image%d.tiff",start=1,end=3662,fps=18).ConvertToYV12()
                    
                    
                    c=align(a,b)
                    
                    
                    
                    LowEx = c.SelectOdd()
                    HighEx = c.SelectEven()
                    
                    hdr_sprocket_not_align=overlay(HDR1(LowEx,HighEx),HDR2(LowEx,HighEx),x=0,y=0,mask=HDR1(LowEx,HighEx),opacity=1.0,greymask=true,mode="Blend",pc_range=true)
                    
                    hdr_sprocket_align=hdr_sprocket_not_align.sprocketAlign(65,3662)
                    
                    #return c # Use this to adjust the Low and High exposure so they look about the same
                    #return LowEx#.ConvertToRGB24().imagewriter(file="D:\image%d.tiff",start=0,end=0,type="tiff") #Use this to output only the LOW-exposure image sequence that has been aligned (or I think this is the same as the original Low exposure, because only the High exposure gets aligned to the Low exposure; not 100% sure)
                    #return HighEx#.ConvertToRGB24().imagewriter(file="D:\image%d.tiff",start=0,end=0,type="tiff") #Use this to output only the HIGH-exposure image sequence that has been aligned
                    return hdr_sprocket_align#.ConvertToRGB24().imagewriter(file="D:\image%d.tiff",start=0,end=0,type="tiff") #Use this to output the image sequence that has been aligned, sprocket-hole aligned, and HDR merged
                    
                    
                    
                    
                    
                    
                    
                    #################...Functions...###################
                    
                    
                    function align(clip a,clip b)
                    {
                    c = Interleave(a, b)
                    
                    a_cont=9.214285
                    b_cont=0.08
                    
                    a_ref = a.NonlinUSM(1.2,2.6,6.0,8.5).HighlightLimiter(1,true,1,true,100).tweak(cont=a_cont).MT_binarize(threshold=80).greyscale().invert()
                    b_ref = b.NonlinUSM(50.2,2.6,6.0,8.5).HighlightLimiter(1,true,1,true,100).tweak(cont=b_cont).MT_binarize(threshold=25).greyscale().invert()
                    
                    c_ref = Interleave(a_ref, b_ref)
                    
                    # calculate stabilization data
                    mdata = DePanEstimate(c_ref,trust=0.01,dxmax=20,dymax=42)
                    
                    # stabilize
                    c_stab = DePanInterleave(c, data=mdata)
                    
                    b_stab = c_stab.SelectEvery(6, 2)
                    a_stab = c_stab.SelectEvery(6, 1)
                    f = Interleave(b_stab,a_stab)
                    #return StackHorizontal(a_ref ,b_ref ) # Use this to adjust the Low and High exposure so they look about the same
                    return f
                    }
                    
                    
                    
                    
                    
                    function HDR1(clip a_stab,clip b_stabb)
                    {
                    b_stab=b_stabb.TurnLeft().HDRAGC(max_gain=5.0,min_gain=0.5,coef_gain=2.0,coef_sat=1.25,MODE=2,shadows=true,protect=1,reducer=0,corrector=0.0).TurnRight().ColorYUV(off_u=-12,gain_u=0)
                    
                    t=a_stab.TurnLeft().HDRAGC(max_gain=3.0,min_gain=0.5,coef_gain=2.0,coef_sat=2.00,MODE=2,shadows=true,protect=1,reducer=0,corrector=0.0).TurnRight()
                    
                    ab=overlay(b_stab,t,x=0,y=0,mask=b_stab,opacity=0.4,greymask=true,mode="Multiply",pc_range=true).ColorYUV(off_y=0,gain_y=30)
                    abc=overlay(b_stab,t,x=0,y=0,mask=b_stab,opacity=1.0,greymask=true,mode="Difference",pc_range=true)
                    ass=overlay(ab,abc,x=0,y=0,mask=invert(ab),opacity=1.0,greymask=true,mode="Blend",pc_range=true).ColorYUV(off_y=0,gain_y=0)
                    
                    ab1=overlay(t,ass,x=0,y=0,mask=t,opacity=0.15,greymask=true,mode="SoftLight",pc_range=true)
                    
                    l=overlay(ass,ab1,x=0,y=0,mask=ass,opacity=1.0,greymask=true,mode="Blend",pc_range=true).ColorYUV(off_y=0,gain_y=3)
                    ll=overlay(ab1,ass,x=0,y=0,mask=ab1,opacity=1.0,greymask=true,mode="Darken",pc_range=true).ColorYUV(off_y=-19,gain_y=22)
                    k=overlay(l,ll,x=0,y=0,mask=l,opacity=1.0,greymask=true,mode="Blend",pc_range=true)
                    
                    return k
                    }
                    
                    
                    
                    
                    function HDR2(clip a_stab,clip b_stab)
                    {
                    t=a_stab.coloryuv(autowhite=false).TurnLeft().HDRAGC(max_gain=5.0,min_gain=0.5,coef_gain=2.0,coef_sat=2.00,MODE=2,shadows=true,protect=1,reducer=0,corrector=0.0).TurnRight()
                    ab=overlay(b_stab,t,x=0,y=0,mask=b_stab,opacity=0.5,greymask=true,mode="Multiply",pc_range=true).ColorYUV(off_y=0,gain_y=32)
                    
                    ab1=overlay(t,b_stab,x=0,y=0,mask=t,opacity=0.5,greymask=true,mode="hardlight",pc_range=true)
                    l=overlay(ab,ab1,x=0,y=0,mask=ab,opacity=1.0,greymask=true,mode="blend",pc_range=true)
                    last=overlay(l,t,x=0,y=0,mask=ab,opacity=1.0,greymask=true,mode="blend",pc_range=true).TurnLeft().HDRAGC(max_gain=1.5,min_gain=0.1,coef_gain=2.0,coef_sat=0.90,MODE=1,shadows=true,protect=1,corrector=0.45).TurnRight()
                    return last.ColorYUV(off_y=0,gain_y=22)
                    }
                    
                    
                    
                    
                    function sprocketAlign(clip c1,int frame,int time_frames)
                    {
                    c2=c1.Trim(frame,frame).loop(time_frames)
                    c = Interleave(c2, c1)
                    
                    r1=overlay(c2,c2,x=0,y=0,mask=c2,opacity=1.0,greymask=true,mode="HardLight",pc_range=true).greyscale().NonlinUSM(1.2,2.6,6.0,8.5).invert()
                    a_ref=overlay(c2,r1,x=0,y=0,mask=c2,opacity=1.0,greymask=true,mode="Exclusion",pc_range=true).NonlinUSM(3.0,3.0,7.0,12.5).HighlightLimiter(1,true,1,true,100).invert().GaussianBlur(VarY=1).MT_binarize(threshold=1).greyscale().invert().crop(4,30,-1500,-30)
                    
                    
                    
                    r2=overlay(c1,c1,x=0,y=0,mask=c1,opacity=1.0,greymask=true,mode="HardLight",pc_range=true).greyscale().NonlinUSM(1.2,2.6,6.0,8.5).invert()
                    b_ref=overlay(c1.flick(),r2,x=0,y=0,mask=c1,opacity=1.0,greymask=true,mode="Exclusion",pc_range=true).NonlinUSM(3.0,3.0,7.0,12.5).HighlightLimiter(1,true,1,true,100).invert().GaussianBlur(VarY=1).MT_binarize(threshold=1).greyscale().invert().crop(4,30,-1500,-30)
                    
                    
                    c_ref = Interleave(a_ref, b_ref)
                    # calculate stabilization data
                    mdata = DePanEstimate(c_ref,trust=0.01,dxmax=0,dymax=150)
                    # stabilize
                    c_stab = DePanInterleave(c, data=mdata)
                    
                    b_stab = c_stab.SelectEvery(6, 2)
                    #return StackHorizontal(b_ref,a_ref) # use this to fix the Crop
                    return b_stab
                    }
                    
                    
                    
                    
                    
                    function flick(clip e)
                    {
                    o = e
                    sm = o.bicubicresize(88,64).grayscale() # can be altered, but ~25% of original resolution seems reasonable
                    smm = sm.temporalsoften(1,32,255,24,2).merge(sm,0.25)
                    smm = smm.temporalsoften(2,12,255,20,2)
                    o2 = o.mt_makediff(mt_makediff(sm,smm,U=3,V=3).bicubicresize(width(o),height(o),0,0),U=3,V=3)
                    return o2
                    }
                    
                    
                    
                    
                    function NonlinUSM(clip o, float "z", float "pow", float "str", float "rad", float "ldmp")
                    {
                    z = default(z, 6.0) # zero point
                    pow = default(pow, 1.6) # power
                    str = default(str, 1.0) # strength
                    rad = default(rad, 9.0) # radius for "gauss"
                    ldmp= default(ldmp, 0.001) # damping for very small differences
                    
                    g = o.bicubicresize(round(o.width()/rad/4)*4,round(o.height()/rad/4)*4).bicubicresize(o.width(),o.height(),1,0)
                    
                    mt_lutxy(o,g,"x x y - abs "+string(z)+" / 1 "+string(pow)+" / ^ "+string(z)+" * "+string(str)+
                    \ " * x y - 2 ^ x y - 2 ^ "+string(ldmp)+" + / * x y - x y - abs 0.001 + / * +",U=2,V=2)
                    
                    return(last)
                    }
                    
                    
                    function HighlightLimiter(clip v, float "gblur", bool "gradient", int "threshold", bool "twopass", int "amount", bool "softlimit", int "method")
                    {
                    gradient = default (gradient,true) #True uses the gaussian blur to such an extent so as to create an effect similar to a gradient mask being applied to every area that exceeds our threshold.
                    gblur = (gradient==true) ? default (gblur,100) : default (gblur,5.0) #The strength of the gaussian blur to apply.
                    threshold = default (threshold,150) #The lower the value, the more sensitive the filter will be.
                    twopass = default (twopass,false) #Two passes means the area in question gets darkened twice.
                    amount = default (amount,10) #The amount of brightness to be reduced, only applied to method=2
                    softlimit = default (softlimit,false) #If softlimit is true, then the values around the edges where the pixel value differences occur, will be averaged.
                    method = default (method, 1) #Method 1 is multiply, the classic HDR-way. Any other method set triggers a brightness/gamma approach.
                    
                    amount = (amount>0) ? -amount : amount
                    
                    darken=v.Tweak(sat=0).mt_lut("x "+string(threshold)+" < 0 x ?")
                    blurred= (gradient==true) ? darken.gaussianblur(gblur).gaussianblur(gblur+100).gaussianblur(gblur+200) : darken.gaussianblur(gblur)
                    fuzziness_mask=blurred.mt_edge(mode="prewitt", Y=3, U=2, V=2).mt_expand(mode="both", Y=3, U=2, V=2)
                    multiply = (method==1) ? mt_lut(v,"x x * 255 /") : v.Tweak(bright=amount)
                    multiply = (method==1) ? eval("""(twopass==true) ? mt_lutxy(multiply,v,"x y * 255 /") : multiply""") : eval("""(twopass==true) ? multiply.SmoothLevels(gamma=0.9,smode=2) : multiply""")
                    
                    merged=mt_merge(v,multiply,blurred)
                    fuzzy= (softlimit==true) ? mt_merge(merged,mt_lutxy(v,merged,"x y + 2 /"),fuzziness_mask) : merged
                    return fuzzy
                    }
                    I put up a picture of how the Low and High exposure should look when the adjustment in the align() function is good.

                    [Attached image: T_image.jpg (213.1 KB)]

                    Use the "return StackHorizontal(a_ref,b_ref)" in the align() function to adjust so the Low and High exposure look about the same,
                    and use "return c" in line 25.

                    Adjust a_cont=9.214285, b_cont=0.08, MT_binarize(threshold=80), and MT_binarize(threshold=25) inside the align() function.
                    I have used the same settings for a long time, so when you get it to work there should be no need to adjust further.
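
                    As a sketch, the tuning step inside align() looks like this (the values shown are the ones already in the script; yours may differ):

                    Code:
                    # temporarily enable the debug view at the end of align():
                    return StackHorizontal(a_ref,b_ref)
                    # then tweak these until Low and High look about the same:
                    # a_cont=9.214285
                    # b_cont=0.08
                    # MT_binarize(threshold=80)   # for the Low reference
                    # MT_binarize(threshold=25)   # for the High reference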

                    There were again many space errors, but I think I have fixed them all now.

                    Code:
                    #you can use
                    
                    #return hdr_sprocket_align#.ConvertToRGB24().imagewriter(file="D:\image%d.tiff",start=0,end=0,type="tiff") #Use this to output the image sequence that has been aligned, sprocket-hole aligned, and HDR merged
                    
                    #or
                    
                    return hdr_sprocket_not_align#.ConvertToRGB24().imagewriter(file="D:\image%d.tiff",start=0,end=0,type="tiff") #Use this to output the image sequence that has been aligned and HDR merged, but not sprocket-hole aligned

                    I forgot "return hdr_sprocket_not_align.sprocketAlign(65,3662)"; also, inside the sprocketAlign() function, enable "#return StackHorizontal(b_ref,a_ref) # use this to fix the Crop", like in the AviSynth script I posted some days ago.
                    When you get it to work, there is no need to adjust more.

                    With the hashtag symbol # you enable or disable lines in the AviSynth script.

                    Everything that comes after a hashtag symbol # is only text, not code; it is invisible to AviSynth, so AviSynth ignores it.
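
                    For example, using the variables from the script above:

                    Code:
                    #return LowEx    # disabled: AviSynth skips this whole line
                    return HighEx    # enabled: this runs; the text after # is just a note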

                    Inside the align() function, dxmax=20,dymax=42 must be higher if there is more jitter difference between the Low and High exposure:
                    dxmax=20 allows up to 20 pixels of horizontal shift
                    dymax=42 allows up to 42 pixels of vertical shift
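
                    As a sketch, the corresponding line in align() with a wider search range would be (the doubled values are only an example):

                    Code:
                    # allow more jitter between Low and High exposure:
                    mdata = DePanEstimate(c_ref,trust=0.01,dxmax=40,dymax=84)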
                    Last edited by Mattias Norberg; July 09, 2020, 01:46 PM.



                    • My stable film scanner; I'm scanning HDR here. It captures the Low and High exposure before the next frame comes: https://www.youtube.com/watch?v=xnBGaFhk70Y

                      Here is my not-so-stable film scanner. It is scanning HDR too, capturing the Low and High exposure before the next frame comes:
                      https://www.youtube.com/watch?v=IaJuNBdQgH8

                      Maybe a little off topic to post this.



                      • Originally posted by Bruce Davis View Post
                        Hi Stan, will it be possible to take two snapshots of the same image with different exposures (automated) rather than running the film through twice,
                        It is possible. Double trigger is not a problem. Synchronizing the camera is more complex, but still possible. I like Mattias's LED modulation idea.
                        To test that, the MSP code needs to be modified for double trigger and LED control. For now we could use the takeup output and modify it for a 3.3 V source instead of a 12 V source.
                        The MSP ISR would have to have an LED counter that pulses the LED during the timer interrupt (6 kHz). The counter controls the duty cycle. A cap would need to be added to integrate the pulses.

                        Another idea is to bring the camera trigger back to the PC and then use custom code to control the camera. Since it would be pretty hard to rewrite the whole of IC Capture, the idea is to set up everything with IC Capture first, close it, and then run the HDR. If periodic adjustments to a few params are required, the controls for that could be added to the HDR code.

                        The third option is using the UX264. The Hawkeye board camera mounting would have to be changed. Also lots more money.

                        Originally posted by Bruce Davis View Post
                        To get this result https://photos.google.com/share/AF1Q...9JM0J0TUFhc3NB how much difference in exposure + - from auto exposure did you use (if you can remember).
                        I can easily find out, since the images are still on my PC. Generally I increased the exposure to be able to see the clouds in a bright sky, and reduced the exposure for reasonable detail in the shadows.



                        • Originally posted by David Brown View Post
                          Wow, a lot of information to digest here!
                          Stan, you must be developing new options for the capstan. I don't think I have a "takeup" switch unless that is either REVERSE or REWIND. I wonder if the lady in the white dress knows of her celebrity? Both sides are looking better than previous comparisons! I'm not sure what you're doing, is it typical HDR from multiple exposures?
                          No, no new options, it is the REWIND switch. My bad.
                          That lady must be pretty old now. Hope she is still alive and that she is reading our forum and enjoying the celebrity status.
                          Hmm, the right image was taken using our standard procedure but obviously with the new camera. The only difference is that I turned the highlight reduction on and the exposure limit off.
                          The left image was obtained using Mattias's script with low and high images.



                          • One of the cameras with multi-frame capability is
                            https://www.oemcameras.com/dfm37ux250ml.htm

                            Taking big bucks now.
                            Looks like it is still available. OEMcameras support are checking it for me.



                            • This should be filed under "Interesting but not available for Hawkeye"!

                              Sony Quad Bayer released 2 years ago, now appearing in mobile phones. The drone is the DJI Mavic Air2.

                              https://www.dpreview.com/news/024978...d-bayer-design
                              ----------------

                              Thanks for the script Mattias. I'll be reading thru that and attempting to run it later.



                              • Reminds me of an old Fuji camera sensor where each pixel group had a smaller, low-sensitivity partner that could stop white-level clipping.

