LRS xAPI Data POST
We have our LRS stood up, Articulate Storyline has the connection configured in the presentation, and the connection is being accepted by our LRS. The issue we are seeing is that it only sends OPTIONS requests and never sends the POST requests for the xAPI actions/engagements we have set in the presentation. Does anyone know why this would be happening and how to correct it? The OPTIONS request returns the 204 as expected, so it's not failing, and the presentation appears to send an OPTIONS request for each of the engagements, but never a POST.
Course not restarting where I left off when revisiting

Hi, I've published a course and am testing how it resumes in the LMS. But whenever I close the course and click "resume," it always takes me back to the same slide, no matter which slide I closed on. It's odd because I've checked all my triggers, and nothing should be forcing it to return to that slide. Has anyone else encountered this issue or have any ideas on what might be causing it?
XLF Version 2.1

I have subscribed to the Advanced version of DeepL as a translation tool. DeepL requires XLF version 2.1 for translation, but Rise 360 only exports version 1.2. Has anyone been able to solve this problem when exporting for translation? Can Articulate update the Rise export so XLF files for translation use version 2.1? Thanks
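For anyone comparing the two formats: XLIFF 1.2 and 2.1 differ structurally (trans-unit bodies versus unit/segment elements, and language attributes moving onto the root), so editing the version number alone won't produce a valid file. As a purely hypothetical sketch of the rewrapping involved (a real converter must use an XML parser and handle inline tags, notes, and attributes this regex approach ignores):

```javascript
// Hypothetical, simplified sketch: rewrap XLIFF 1.2 <trans-unit> elements as
// XLIFF 2.1 <unit>/<segment> elements. XLIFF 2.1 reuses the 2.0 namespace.
// Not production-safe: real files need proper XML parsing.
function convertXliff12to21(xml, srcLang, trgLang) {
  return xml
    .replace(/<xliff[^>]*>/,
      `<xliff xmlns="urn:oasis:names:tc:xliff:document:2.0" version="2.1" srcLang="${srcLang}" trgLang="${trgLang}">`)
    .replace(/<file[^>]*>/, '<file id="f1">')      // 1.2 file attributes dropped
    .replace(/<\/?body[^>]*>/g, '')                // 2.x has no <body> wrapper
    .replace(/<trans-unit id="([^"]+)"[^>]*>/g, '<unit id="$1"><segment>')
    .replace(/<\/trans-unit>/g, '</segment></unit>');
}
```

Until the export itself changes, running the Rise XLF through a transformation like this (or a dedicated conversion tool) before uploading to DeepL is the usual workaround people describe.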
Viewing Articulate 360 Content in SharePoint Online

Enabling Custom Scripts in SharePoint Online

Custom scripts are now disabled by default in SharePoint Online for security reasons. As a result, Articulate content with the story.html file renamed to story.aspx in the published output, which previously worked with SharePoint Online, might not work anymore. If you don't need to track learners' progress or results, check out these free or low-cost options for web hosting. If you need to use SharePoint Online, your SharePoint admin may be able to resolve this issue by following the steps below. (Note that we don't provide support for either workaround.) As of July 10, 2024, SharePoint users are required to reenable the Custom Scripts feature every 24 hours, as the Custom Scripts setting reverts to its default, disabled state. Please see Microsoft's article on Custom Script settings for more information.

Enabling Custom Script via the SharePoint Admin Center

If you don't need instant access, follow these steps.

1. Go to the SharePoint admin center and sign in with your credentials.
2. In the sidebar to the left of the page, click Settings. (If you're using the Modern admin center, click the classic settings page hyperlink at the bottom of the Settings page.)
3. Scroll to the Custom Script section, then select the options to Allow users to run custom script on personal sites and Allow users to run custom script on self-service created sites.
4. Click OK to save your changes. Note that this change may take up to 24 hours to take effect.

Enabling Custom Script in SharePoint Online via PowerShell

For instant access, follow these steps.

1. Open Windows PowerShell with admin privileges, then run:

   Install-Module -Name PnP.PowerShell

2. Connect to your tenant, replacing <url> with your SharePoint URL (this will generate a code for you to enter in your SharePoint admin center):

   Connect-PnPOnline -Url <url> -PnPManagementShell
Next, run these commands in PowerShell, replacing the URL after -Url in the first command with the link to your static site collection, such as https://companyabc.sharepoint.com/sites/StaticSite. (If you need help creating a SharePoint site, refer to this article from Microsoft.)

   Connect-PnPOnline -Url https://yourorg.sharepoint.com/sites/StaticSite
   $site = Get-PnPSite
   Set-PnPSite -Identity $site.URL -NoScriptSite $false

Your SharePoint site is almost ready to host HTML files! We just need to prepare the Articulate published output for upload. Here's how.

1. In SharePoint, choose where you will locate this project. You can create a new folder or use the Documents location created by default with all SharePoint sites.
2. Rename all the files with a .html extension in your unzipped published output folder to .aspx (keep the same file name). To do this, right-click the file, choose Rename, and then replace .html with .aspx. (Most projects only need the analytics-frame.html and story.html files renamed.)
3. Finally, upload the published output to your SharePoint site, then click story.aspx to launch your Articulate course. This change should take effect immediately.
Quizz Display trouble

Hello everyone, yesterday several users (in different centers) had quiz display troubles. Here are some examples: They were on Google Chrome or Edge, but I have no more details since I wasn't there. Do you know if the problem could have been caused by a Chrome or Edge version? It really is an issue, since the quizzes are final exams that validate professional trainings. Thank you in advance for your help. Regards, Veronique.
Variables on layers

I'm seeing some unexpected behaviour in Storyline 3.93.33359. I want a button (or another hypothetical action) to show different layers based on a true/false variable. In a simple example, if the variable is false, the button should show layer 1; if true, layer 2. I want this to happen in sequence, so after layer 1 has been shown, the button should show layer 2. So I have a button on the base layer with these triggers, and a trigger on layer 1 that sets the variable to true on timeline start. Both layers are set to pause the timeline of the base layer and prevent clicking on other layers. What I would expect to happen is this sequence:

1. User clicks the button.
2. Layer 1 is shown.
3. The variable is set to true.
4. User clicks the button again.
5. Layer 2 is shown.

Instead, layer 2 is shown on the first click. I can't work out why this would be, except if all these steps were being played at once. But you'd expect layer 1's timeline start to be necessary before layer 2 can be shown on click, and layer 1's timeline won't start until after the button click. I can't prove it, as I am prevented by work policy from installing a previous version, but I think this was working differently in an earlier version: I implemented a few layers this way and didn't notice any issues until recently. I've attached a Story file for reference.
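One likely explanation (worth verifying against your version) is that the button's triggers are evaluated top to bottom within the same click: the first trigger shows layer 1, layer 1's timeline start immediately sets the variable to true, and the second trigger then sees the new value and shows layer 2, all before the click finishes. In plain JavaScript terms (an analogy only, not Storyline's actual engine), it behaves like two independent `if` statements rather than an if/else:

```javascript
// Illustration only: two separate "if" triggers evaluated in order
// fall through on a single click, reproducing the reported behaviour.
let showLayer2 = false; // the true/false variable
const shown = [];

function clickWithTwoIfs() {
  if (!showLayer2) {        // trigger 1: show layer 1 if variable is false
    shown.push('layer 1');
    showLayer2 = true;      // layer 1's timeline start flips the variable
  }
  if (showLayer2) {         // trigger 2: show layer 2 if variable is true
    shown.push('layer 2');  // ...and it is already true on the SAME click
  }
}

clickWithTwoIfs(); // the first click ends up showing both layers
```

If that is what's happening, reordering the triggers so the "show layer 2 if true" trigger sits above the "show layer 1 if false" trigger (so the variable change can't cascade within one click) should restore the intended one-layer-per-click sequence.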
What do I title this course?

Hi all. I am creating an on-demand eLearning course that covers EVERYTHING about a software system. It is to take the place of a multi-day VILT. There will be introductory info to start: foundational knowledge and background material. I will cover the very basics of the software (the layout of the interface, core terms, and functions), and then the course will move on to more advanced topics, some rather complex, and some that not everyone will have to view, based on their role. I had to give it a name for tracking purposes and just called it "(Software System) 101," but I really don't love that, as some content will be advanced. My brain is so taxed, I cannot come up with anything better. Please help!
Scoring User Drawn Images in Storyline

Huh, my whole previous post just vanished. Trying again...

This is a follow-up to a previous discussion post on drawing annotations on a Storyline slide. In that post, I demonstrated an approach allowing the user to draw on an image to indicate some kind of response. It utilized a canvas element and intercepted mouse clicks and movements to draw paths. The next step, as pointed out by Math Notermans, was to score the user's input in some way. There are several JavaScript libraries available that perform image comparisons, usually returning some kind of quantified pixel difference as a result. Resemble.js is one such option. It returns differences as a percentage of pixels compared to the entire image size. The question then is how to turn this into a usable score.

Demo: https://360.articulate.com/review/content/d96de9cf-2fd1-45a5-a41a-4a35bf5a1735/review

In this example, I made a few improvements to the annotation script that was posted previously. Most notably, I added a simple undo option that records and recreates user drawings. This also allows the user's drawing to maintain its sharpness after resize events, instead of the previous approach of straight scaling. I also changed it to allow drawing on a touch screen (limited testing). I included a loader for Resemble.js, and some code connected to the Check button to evaluate what the user has drawn. While this example is really just meant to demonstrate the process and help you visualize the results, the idea could easily be applied to some kind of complex user interaction that is not better served by more traditional point-and-click or drag-and-drop selections. As this demo shows, it could be particularly well suited to having users determine the proper pathway for something in a more free-response fashion, as opposed to just selecting things from a list or dropping shapes. After drawing a response to the prompt, clicking Check will generate a score.
The score is based on the comparison of the user's response to predetermined keys, which are images that you include when building the interaction. I used two keys here, one for the ‘correct’ answer and one for a ‘close’ answer. You can set it up to include more key options if you need more complexity. Since all we get from Resemble is a difference score, we need to convert that into a similarity score. To do that, I followed these steps:

1. Copy the key images to individual canvases.
2. Create a blank canvas for comparisons.
3. Convert these and the user drawing canvas to blobs to send to Resemble.
4. Compare the user drawing to the blank (transparent) canvas to get its base difference.
5. Compare each of the keys in the same way to get their base difference scores. These, along with the visualized differences, are shown on the top three inset images.
6. Compare each key with the user drawing to get the compared differences. The comparison order needs to be consistent here. These are shown on the lower two inset images.
7. Calculate the similarity scores. (This will differ slightly between scenarios, so you need to customize it to create the score ranges you expect.)

The similarity is essentially a score that ranges from 0 to 1, with 1 being the most similar. When creating your keys, you need to note what brush sizes and colors you are using. Those should be specified to the user, or preset, for best results. Resemble has some comparison options, but you want to make the user's expected response as similar to the key as you can.

For the ‘correct’ answer: The similarity is just 1 - (compared difference) / (user base difference + key base difference). To properly range this from 0 to 1, we also make some adjustments. We cap the (user + key) sum at 100%, and then set the similarity floor to 0. We also divide this result by an adjustment factor. This factor is essentially the best uncorrected score you could achieve by drawing the result on the slide.
Here, I could not really get much over 85%, so we normalize this to become 100%. Next, we make an adjustment that weighs the total area of the ‘correct’ key against the total area drawn by the user. If the user draws a lot more or less than the correct answer actually contains, we do not want the result to be unduly affected. This eliminates much of the influence caused by scribbling rough answers across the general correct location. Before, scribbling could often increase the final score; this fixes that. The adjustment is to multiply the current similarity score by (the lesser of the user or key base differences) / (the square of the greater of the base differences). We use the square in the denominator to ensure that drawing too much or too little will rapidly decrease the overall similarity score. We again cap this final adjusted similarity score at 1, ensuring a working range of 0 to 1.

For the ‘close’ answer: The idea is similar, but may need adjustment. If your close answer is similar in size to the correct answer, then the same procedure will work. In our case, I used a region around the correct answer to give partial credit. This region is roughly 2 times the size of the correct answer. As a result, we only expect a reasonable answer to cover about 50% of the close answer at best, so our minimum compared difference should be about half of the key base difference value. To compensate, we add an additional adjustment factor for the ratio between ‘close’ and ‘correct’ answers (here, 2). We set our other adjustment factor as we did before, using the highest achievable uncorrected score (which, unsurprisingly, is about 0.4 now instead of 0.85).

The final score is just the greater of the similarity scores times a weighting factor (1 for ‘correct’, 0.8 for ‘close’), converted to a percentage.

To improve

- To make this more useful, you would probably want to load it on the master slide and toggle triggers from your other slides to make comparisons.
- Rearrange the code to only process the keys and blank canvas once per slide, or only after resizing, instead of each time Check is clicked, to save some overhead.
- Actively remove the previous canvas elements and event handlers when they are replaced.
- This uses a bunch of callback functions while processing blobs and comparing images, which requires several interval timers to know when each step is complete before starting the next. It might be done better using promises, or by restructuring the code a bit.
- I think Resemble just works on files, blobs, and data URIs (e.g., base64-encoded images). I haven't checked whether it can work directly from elements or src links, but I don't think so.
- Resemble should probably be loaded from static code to ensure functionality.
- Key images could also be loaded from files instead of slide objects, though slide objects might be easier for users to locate and view.
- There are other library options for comparing images. Some may be faster or more suited to your needs. If they produce a difference score, the same approach should mostly apply.
- Fix the sliding of the slide on mobile when drawing with touch.
Drawing Annotation on Storyline Slide

Demo: https://360.articulate.com/review/content/518383b2-1161-408d-b9f5-adb9a6f57a11/review

Inspired by code discussed on https://img.ly/blog/how-to-draw-on-an-image-with-javascript/ and in Using Charts in your Storyline | Articulate - Community.

About

This is a brief example of using an HTML canvas element to enable annotation on a Storyline slide. The example displays an image on a slide, sets a few variables and the accessibility tags of some slide objects. It then runs a JavaScript trigger to create a canvas over the image, and then watches the mouse buttons and movement to allow drawing on the canvas.

How it works

A canvas element is created, filled with the specified base image, and inserted below a small rectangle (canvasAnchor) that is the same width as, and placed directly above, the image. Another rectangle (canvasClickArea) overlays and is sized to match the image. This is the area that allows drawing on the canvas (where the mouse is watched). Brush width and color can be controlled. The drawing can be cleared. It also resizes with the slide.

To improve

- The events that watch the mouse and the clear button should be better handled to allow removal when a new canvas is created.
- A mechanism to allow a blank (clear) base should be reinstated. Right now it just relies on the initial image from the slide. Both options have their uses.
- Since the canvas is a raster image, resizing to very small and then very large results in poor image quality.
- The image can be extracted from the canvas. This could be saved or printed.
- More drawing options are available with the canvas element.

Credit: X-ray images from https://learningradiology.com/
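As a rough sketch of the recording side of this approach (names are illustrative, not from the demo file): storing strokes as lists of points, rather than only painting pixels, is what later makes undo and crisp redraws after a resize possible.

```javascript
// Minimal stroke recorder: capture points from mouse events, then replay
// them onto a canvas 2D context at any scale. Not the demo's actual code.
function createAnnotator() {
  const strokes = [];
  let current = null;
  return {
    start(x, y) { current = [{ x, y }]; strokes.push(current); }, // mousedown
    move(x, y) { if (current) current.push({ x, y }); },          // mousemove
    end() { current = null; },                                    // mouseup
    undo() { strokes.pop(); },
    strokeCount() { return strokes.length; },
    render(ctx, scale = 1) { // redraw everything, e.g. after a resize
      for (const stroke of strokes) {
        ctx.beginPath();
        stroke.forEach((p, i) => i === 0
          ? ctx.moveTo(p.x * scale, p.y * scale)
          : ctx.lineTo(p.x * scale, p.y * scale));
        ctx.stroke();
      }
    },
  };
}
```

In a setup like the one described, the canvasClickArea's mousedown/mousemove/mouseup handlers would call start/move/end, and after a slide resize you would clear the canvas and call render(ctx, newWidth / originalWidth) so the strokes stay sharp instead of being scaled as pixels.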
SSML Tags for AI Text-to-Speech not working for milliseconds

Hi, I'm using the AI text-to-speech (Matilda) and attempting to use SSML tags to create pauses. I'm finding that when I use whole seconds in the break time tag, it works fine, but when I try 900ms, it does nothing. Here is what I have at the moment:

<speak>This is a story about four people named Everybody, <break time="900ms"/>Somebody, <break time="900ms"/>Anybody, <break time="900ms"/>and Nobody. <break time="1.2s"/> There was an important job to be done and Everybody was asked to do it. <break time="1s"/> Everybody was sure Somebody would do it. <break time="1s"/> Anybody could have done it, but Nobody did it. <break time="1s"/> Somebody got angry about that<break time="700ms"/> because it was Everybody’s job. <break time="1s"/> Everybody thought Anybody could do it, <break time="900ms"/> but Nobody realized that Everybody wouldn’t do it. <break time="1s"/> It ended up that Everybody <break time="700ms"/>blamed Somebody <break time="900ms"/> when Nobody did <break time="900ms"/> what Anybody could have done. </speak>
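Since the whole-second and 1.2s breaks work, one workaround worth trying (an untested guess, not confirmed Articulate behavior) is expressing the same pauses as fractional seconds, which SSML also permits: 900ms becomes 0.9s. A quick helper to rewrite the markup before pasting it in:

```javascript
// Hypothetical workaround: rewrite millisecond break times as fractional
// seconds (900ms -> 0.9s) before pasting SSML into the text-to-speech field,
// in case the engine only honors second-based values.
function msBreaksToSeconds(ssml) {
  return ssml.replace(/time="(\d+)ms"/g, (_, ms) => `time="${Number(ms) / 1000}s"`);
}
```

For example, msBreaksToSeconds('<break time="900ms"/>') returns '<break time="0.9s"/>', while already-second-based values like time="1s" and time="1.2s" are left untouched.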