[ { "QuestionId": "76378346", "QuestionTitle": "How to modify a regular expression to validate a text box with letters and numbers only, excluding certain characters like !@#$%^&*+=", "QuestionBody": "Regular Expression for text box validation. I need to validate textbox. conditions like below\n\ncan enter numbers and letters.\nneed to avoid these characters !@#$%^&*+=\n\nCurrently i am using below regular expression. but it is not working.\n \ncan u help me to find correct regular expression\n", "AnswerId": "76378431", "AnswerBody": "The regulation expression you are currently using is for validate email addresses.\nSo you have to use a regular expression to validate textboxes, make it only contain letters and numbers and avoid the characters (!@#$%^&*+=).\n^[a-zA-Z0-9]+$\n\nThis will match any strings that is a letter or a number. It will not match any strings that any characters !@#$%^&*+=.\n\n\n\n\n\n\nHope this will help. Thank you!\nRefer to this link if you need more clarifications https://laasyasettyblog.hashnode.dev/validating-username-using-regex\nImproving my answer with the question u asked,\n\n\n\n\n\nTest Pattern\n\n\n\n\n\n\n\n\n\n\n" }, { "QuestionId": "76378408", "QuestionTitle": "How can i add new key and value in a existing list", "QuestionBody": "I'm trying to create a blind auction. So basically it will ask for your name and then your bid. After that, it will ask if there are any bidders, if yes it will ask you the name and the bid price. But after you said yes the terminal will be cleaned. So that the other bidder can't see how much the other person bid, if I run print on the [data_base] it can't print more than two keys and value.\nHere is the output:\nWhat is your name?: Gael\nWhat is your bid: $560\nAre there any other bidders? Type 'yes or 'no'.\nyes\n\n\\[({'Gael': \\['560'\\]},)\\]\nWhat is your name?: Mikey\nWhat is your bid: $350\nAre there any other bidders? 
Type 'yes or 'no'.\nyes\n\n\\[({'Mikey': \\['350'\\]},)\\]\nWhat is your name?: Josh\nWhat is your bid: $298\nAre there any other bidders? Type 'yes or 'no'.\nno\n\nHere is the final output:\n[({'Mikey': ['350']},), ({'Josh': ['298']},)]\nGael's name and his bid are missing.\nHere is the code:\nimport os\n\nwhile True:\nname = input(\"What is your name?: \")\nbid = input(\"What is your bid: $\")\n\n other_user = input(\"Are there any other bidders? Type 'yes or 'no'.\\n\")\n \n if other_user == 'yes':\n os.system('cls')\n data_base = [\n \n ]\n def new_user(name, bid):\n brandnew_user = {\n name: [bid]\n \n },\n \n data_base.append(brandnew_user)\n \n new_user(name, bid)\n print(data_base)\n \n \n if other_user == 'no':\n break\n\nThank you!!\nI was expecting that Gael's name and bid will be recorded. But it did not, it only recorded, Mikey and Josh.\n", "AnswerId": "76378435", "AnswerBody": "Here's a better way to organize things. Also, I'm not sure why you are creating a list of tuples of dictionaries. Why not just make data_base a dictionary and store the new entries as keys?\nimport os\n\ndata_base = []\nwhile True:\n name = input(\"What is your name?: \")\n bid = input(\"What is your bid: $\")\n\n data_base.append( {name: [bid]} )\n print(data_base)\n\n other_user = input(\"Are there any other bidders? Type 'yes or 'no'.\\n\")\n if other_user == 'no':\n break\n\nHere's what I'm talking about:\nimport os\n\ndata_base = {}\nwhile True:\n name = input(\"What is your name?: \")\n bid = input(\"What is your bid: $\")\n\n data_base[name] = [bid]\n print(data_base)\n\n other_user = input(\"Are there any other bidders? 
Type 'yes or 'no'.\\n\")\n if other_user == 'no':\n break\n\n" }, { "QuestionId": "76378340", "QuestionTitle": "How to repeat video with start and end time in Android Studio?", "QuestionBody": "I'm getting error in Android Studio on second \"cannot resolve symbol second\" how to fix it so that it loops from 358 to 331 in this example?\npackage com.example.myapp;\n\nimport androidx.annotation.NonNull;\nimport androidx.appcompat.app.AppCompatActivity;\n\nimport android.content.Intent;\nimport android.os.Bundle;\nimport android.view.View;\nimport android.widget.RelativeLayout;\n\nimport com.pierfrancescosoffritti.androidyoutubeplayer.core.player.YouTubePlayer;\nimport com.pierfrancescosoffritti.androidyoutubeplayer.core.player.listeners.AbstractYouTubePlayerListener;\nimport com.pierfrancescosoffritti.androidyoutubeplayer.core.player.views.YouTubePlayerView;\n\npublic class FingerStretching extends AppCompatActivity {\n\n @Override\n protected void onCreate(Bundle savedInstanceState) {\n super.onCreate(savedInstanceState);\n setContentView(R.layout.activity_finger_stretching);\n\n YouTubePlayerView youTubePlayerView = findViewById(R.id.youtube_player_view);\n getLifecycle().addObserver(youTubePlayerView);\n\n youTubePlayerView.addYouTubePlayerListener(new AbstractYouTubePlayerListener() {\n String videoId = \"mSZWSQSSEjE\";\n @Override\n public void onReady(@NonNull YouTubePlayer youTubePlayer) {\n youTubePlayer.loadVideo(videoId, 331);\n }\n \n public void onCurrentSecond(@NonNull YouTubePlayer youTubePlayer) {\n if(second == 358) youTubePlayer.seekTo(331);\n }\n });\n\n\n\n }\n}\n\ntried creating local variable second\n", "AnswerId": "76378439", "AnswerBody": "According to the source code, the signature of onCurrentSecond is\noverride fun onCurrentSecond(youTubePlayer: YouTubePlayer, second: Float)\n\nYou are not overriding it. 
It should be\n@Override\npublic void onCurrentSecond(@NonNull YouTubePlayer youTubePlayer, float second) {\n if(second >= 358) youTubePlayer.seekTo(331);\n}\n\nThis kind of error is easy to avoid if you make use of the auto-complete feature in the IDE. Typing onC within the AbstractYouTubePlayerListener should give you an auto-complete option for onCurrentSecond; selecting it will automatically write the override function for you with the correct signature.\n" }, { "QuestionId": "76378344", "QuestionTitle": "how to use react function in codepen?", "QuestionBody": "How to use React functions in CodePen?\nI wrote a React function in CodePen to test React hooks, however it constantly keeps reporting the error: Uncaught ReferenceError: require is not defined.\nMy Code:\nimport {useState, useEffect,useRef } from 'react';\n\nfunction Test() {\n const [count, setCount] = useState(0);\n const prevRef = useRef();\n \n useEffect(() => {\n // const ref = useRef();\n console.log('ref----', prevRef.current);\n prevRef.current = count;\n })\n \n return (\n
<div>\n <button onClick={() => setCount(count+1)}>+1</button>\n <div>{`count: ${count}`}</div>\n <div>{`precount: ${prevRef.current}`}</div>\n </div>
\n )\n}\n\nReactDOM.render(, document.getElementById(\"app\"));\n\n\n", "AnswerId": "76378496", "AnswerBody": "You can add a package by adjusting the settings in your Pen.\nTake a look at the following image for reference:\n\nBy doing so, it will automatically generate the necessary import statement:\nimport React, { useState, useEffect, useRef } from 'https://esm.sh/react@18.2.0';\nimport ReactDOM from 'https://esm.sh/react-dom@18.2.0';\n\nTo help you understand this process, I've created a sample code on CodePen. You can refer to this example to implement it yourself.\nHere is the codepen link to the sample code: https://codepen.io/camel2243/pen/ExdBRar\n" }, { "QuestionId": "76378323", "QuestionTitle": "Search Customers that are part of the logged on User's Business?", "QuestionBody": "The code I currently have is this, in my views.py I can't figure out how to set up my search function. All other functions work.\nmodels.py\nclass User(AbstractUser):\n \"\"\"User can be Employee or Customer\"\"\"\n\nclass Business(models.Model):\n business = models.CharField(max_length=50)\n\nclass BusinessOwner(models.Model):\n user = models.OneToOneField(User, on_delete=models.CASCADE, null=True )\n business = models.ForeignKey(Business, on_delete=models.CASCADE, null=True)\n\nclass Customer(models.Model):\n \"\"\" Customer-specific information \"\"\"\n user = models.OneToOneField(User, on_delete=models.CASCADE, null=True )\n business = models.ForeignKey(Business, on_delete=models.CASCADE, null=True)\n\nclass Employee(models.Model):\n \"\"\" Employee-specific information \"\"\"\n user = models.OneToOneField(User, on_delete=models.CASCADE, null=True)\n business = models.ForeignKey(Business, on_delete=models.CASCADE, null=True, blank=True)`\n\nforms.py\nclass UserForm(UserCreationForm):\n class Meta:\n model = User\n fields = ( \"username\", \"email\", \"password1\", \"password2\", \"first_name\", \"last_name\", )\n\n\nclass BusinessOwnerForm(forms.ModelForm):\n. . . 
no fields\nclass EmployeeForm(forms.ModelForm):\n. . . no fields\n\nclass CustomerForm(forms.ModelForm):\n. . . no fields\n\nclass BusinessForm(forms.ModelForm):\n class Meta:\n model = Business\n fields = ( \"business\", )\n\nviews.py (user creation process)\ndef searchUsers(request):\n qs_owned_businesses = BusinessOwner.objects.filter(user = request.user).values('business_id')\n qs_biz_customers = Customer.objects.filter(business_id__in=qs_owned_businesses)\n if request.method == \"GET\":\n query = request.GET.get('search')\n if query == '':\n query = 'None'\n results = User.objects.filter(username__icontains=query, id__in=qs_biz_customers)\n return render(request, 'search_users.html', {'query': query, 'results': results})\n \n \n\n\n#example of hows employees and customers are created in my views:\ndef employeeCreation(request):\n \"\"\"Creates an Employee\"\"\"\n if request.method == \"POST\":\n employee_form = EmployeeForm(request.POST)\n user_creation_form = UserForm(request.POST)\n if (user_creation_form.is_valid() and employee_form.is_valid()):\n employee_form.instance.business = request.user.businessowner.business\n new_user = user_creation_form.save(commit=False)\n employee_form.instance.user = new_user\n user_creation_form.save()\n employee_form.save()\n messages.success(request, \"You Have Created An Employee\" )\n return redirect(\"user-homepage\")\n else:\n messages.error(request, \"Try creating an Employee Again something went wrong.\")\n employee_form = EmployeeForm()\n user_creation_form = UserForm()\n return render (request, \"registration/employee_creation.html\", \n context={\"user_creation_form\": user_creation_form, \n \"employee_form\": employee_form,\n })\n\n\ndef customerCreation(request):\n. . . functions is exactly the same as employee creation just for a customer. The Business owner's business is used as a starting point to build employees off of. 
I didn't incldue that view because it's not necessary for this and stack overflow limits how much code I put here.\n\n\nsearch_users.html\n\n\n{% if results %}\n you Searched for {{ query }} . . \n {% for x in results %}\n {{ x }}

\n {% endfor %}\n\n{%endif %}```\nI have tried using Q, icontain ,.filter() and django-filter, but this is a tricky search criteria that I can't get to work.\nnavbar search feature:\n
\n
\n \n \n
\n \n
```\n\n", "AnswerId": "76378505", "AnswerBody": "Let's break this down into tasks. I'm using values() to limit the request to what we're interested in, as I can then use that result to filter further.\n#First you want to get all the businesses the logged in user owns\n#(Currently they can only own one, so you could use get rather than filter,\n#but you might change that later and this approach will still work)\nqs_owned_businesses = BusinessOwner.objects.filter(user = request.user).values('business_id')\n\n#next you want to get all the customers of those businesses\nqs_biz_customers = Customer.objects.filter(business_id__in= qs_owned_businesses).values('user_id')\n\n#finally you want to filter those customers further based on your form field\n#remember, the icontains criteria needs to refer to a field\n#here we're looking at username, but you might use last_name or something else.\nresults = User.objects.filter(username__icontains=query, id__in=qs_biz_customers)\n\nresults should now be a list of users you can cycle through in your template to show names, usernames etc.\n" }, { "QuestionId": "76378468", "QuestionTitle": "Remove remote commits on the branch in GitHub", "QuestionBody": "ALL,\nI made a local branch in my fork a long time ago and pushed some changes to it. 
I then submitted a PR which was passed the CI build.\nNow after some time I came back to the same machine I produced the PR but for some reason I didn't check which branch I was on and made couple of commits on the old branch and pushed them therefore screwing up the PR (it was not yet merged, due to the lack of the test code).\nNow what I'd like to do is go to Github Web interface, remove those commits, but keep them locally, because I can just generate a patch on my local machine, remove those commits, switch to the new branch and apply the patch to it.\nOr maybe even there is a better solution?\nSo how do I solve this mess?\nKeep in mind - I intend to finish the PR with the test, but those are 2 completely unrelated things.\nTIA!!\nEDIT:\nEverythig worked fine and my old branch on the original laptop is back to normal and the PR is now good.\nHowever, in order to put the unit test I had to go to a different machine and do a git pull. For some unknown reason after that the git tree on that machine becomes clogged with everything including the bad commit.\nI was able to revoke bad commits with git reset --hard N, but I fear that the same happen when I try to test my unit test on all platforms/different laptops which means my changes will be lost and I will need to redo them again for the UT on all different machines.\nCan you help me here as well?\nTIA!!\n", "AnswerId": "76378523", "AnswerBody": "After some thought, my original answer is more complicated than strictly necessary, but I'll leave it below.\nThe easiest way to get your original branch back to its old state and keep the new commits is to create a new branch then reset the old branch and force push. 
It looks like this:\ngit checkout old-branch\ngit branch new-branch\ngit reset --hard <commit-hash>\ngit push -f\n\nAlternatively you can use\ngit reset --hard HEAD~n\n\nwhere n is the number of commits you want to remove from the old branch.\nNow you can do whatever you wish with the new branch, such as rebase it onto main. This might not be entirely necessary. If, for example, your PR is merged, you will need to pull those changes into the new branch anyway before making the second PR. However, if you want to make a 2nd PR before the 1st is merged, then it is better to keep them separate until one of them is merged.\nTLDR\nThe easiest way to fix a remote repository is to first make the changes locally and then push, possibly force push, to GitHub or other remote.\nDetails\nYou can do this all locally first, then push to GitHub to fix the PR. First, you should create a new branch and git cherry-pick the commits that you want to keep but remove from the other branch.\nStart by getting the hashes of the commits you want:\ngit checkout old-branch\ngit log --oneline --graph\n\nCopy the commit hashes for the commits you want to move. Then do\ngit checkout -b new-branch main\n\nand for each of the hashes you copied:\ngit cherry-pick <commit-hash>\n\nAlternatively, you can do this more easily with git rebase. You only need the hash of the oldest commit you want to keep:\ngit checkout -b new-branch old-branch\ngit rebase --onto main <oldest-commit-hash>~\n\nNow go back to your old branch and get rid of all the commits you no longer want:\ngit checkout old-branch\ngit reset --hard <commit-hash>\n\nFinally force push:\ngit push -f\n\nThis will automatically update the PR back to its original state, if you used the correct hash for the git reset command.\n" }, { "QuestionId": "76378419", "QuestionTitle": "How to use async properly to get chrome.storage?", "QuestionBody": "I am creating a google chrome extension. On the popup, I am displaying a leaderboard. However, I am new to JavaScript so I don't know how to properly use async. 
I am using chrome.storage to get stored scores to display on the leaderboard, then sending them from background.js to score.js. My issue is that, since chrome.storage.get happens asynchronously, my findScores method does not wait for chrome.storage.get to finish before incorrectly returning a default empty score.\nHere is my code:\nbackground.js\nchrome.runtime.onMessage.addListener(\n function(request, sender, sendResponse) {\n console.log(sender.tab ?\n \"from a content script:\" + sender.tab.url :\n \"from the extension\");\n if (request.type === \"request\") {\n var scoresVar = findScores(request.table, \"All\");\n console.log(\"Sending response \" + scoresVar);\n sendResponse({scores: scoresVar})\n } \n else if (request.type === \"score\") {\n saveScore(request.website, request.score, request.tab);\n sendResponse(\"Finished adding score \" + request.score);\n }\n }\n);\n\nfunction findScores(table, website) {\n const categories = table.split(\"-\");\n if (categories.includes(\"personal\")) {\n chrome.storage.sync.get([website], function(response) {\n if (!(typeof response[website] === 'undefined')) {\n console.log(\"Found \" + response[website]);\n return response[website];\n }\n });\n } else if (categories.includes(\"global\")){\n // TODO: Add global leaderboards\n return [\"-\"];\n }\n console.log(\"Didn't find, on default\");\n return [\"-\"];\n}\n\npopup.js\nasync function requestScores(tableID) {\n var url = \"All\"\n if (tableID.includes(\"current\")) {\n var url = await getCurrentTab();\n }\n console.log(\"Sending message to load scores to \" + url);\n (async () => {\n const response = await chrome.runtime.sendMessage({type: \"request\", request: \"load scores\", table: tableID, tab: url});\n console.log(\"Received: \" + response);\n // add scores to HTML DOM\n });\n })();\n}\n\n\nMy console messages reveal that I first return a default score, which is sent to popup.js. 
I have tried throwing async keywords in front of functions (as well as \"await\" in front of variables like scoresVar = await findScores(request.table, \"All\") but it just caused more issues, where findScores still returned a default value, but background.j instead sent an undefined promise.\nHow can I fix my code?\n", "AnswerId": "76378558", "AnswerBody": "It is simpler to work with Promises and async/await instead of callbacks. chrome.storage.sync.get returns a Promise if you do not pass a callback.\nasync function findScores(table, website) {\n // ...\n if (categories.includes(\"personal\")) {\n const response = await chrome.storage.sync.get([website]);\n if (response[website] !== undefined) {\n console.log(\"Found \" + response[website]);\n return response[website];\n }\n }\n // ...\n}\n// ...\nchrome.runtime.onMessage.addListener(function(request, sender, sendResponse) {\n // ...\n findScores(request.table, \"All\").then(scores => {\n console.log(\"Sending response \" + scores);\n sendResponse({scores});\n });\n return true; // keep the messaging channel open for sendResponse\n});\n\nNote that the callback of onMessage should return a literal true value (documentation) in order to keep the internal messaging channel open so that sendResponse can work asynchronously.\n" }, { "QuestionId": "76383950", "QuestionTitle": "How do I change the focus of the text field on Submit?", "QuestionBody": "I have a text field and it has an onSubmit method, inside which I check for validation and then focus on another field, but for some reason the focus does not work\nonSubmitted: (value) {\n //print(\"ga test\");\n\n if (!widget.validator?.call(value)) {\n setState(() {\n showError = true;\n });\n }\n if (widget.nextFocus != null) {\n FocusScope.of(context).requestFocus(widget.nextFocus);\n }\n\n },\n\n", "AnswerId": "76384028", "AnswerBody": "I did so and it worked\nif (widget.validator != null) {\n setState(() {\n showError = !widget.validator?.call(value);\n });\n }\n if 
(widget.nextFocus != null) {\n FocusScope.of(context).requestFocus(widget.nextFocus);\n }\n\n" }, { "QuestionId": "76380624", "QuestionTitle": "Testng test are ignored after upgrading to Sprint Boot 3 and maven-surefire-plugin 3.1.0", "QuestionBody": "I have an application that was executing TestNG tests perfectly with maven, for example, when using a mvn clean install command.\nCurrently I have updated the application to start using Spring Boot 3.1.0, and now the tests are completely ignored. No tests are executed.\nI am using a classic testng.xml file defined on the maven-surefire-plugin:\n \n org.apache.maven.plugins\n maven-surefire-plugin\n ${maven-surefire-plugin.version}\n \n \n src/test/resources/testng.xml\n \n \n \n\nAll solutions I have found are related about the java classes ending on *Test.java but this is not applied as I am using the testng suite file. And before the update, the tests are working fine.\nWhat has been changed into Spring Boot 3 to skip my tests?\n", "AnswerId": "76380646", "AnswerBody": "Ok, I have found the \"issue\". Seems that the new versions of maven-surefire-plugin needs to include a surefire-testng extra plugin for executing it:\n \n org.apache.maven.plugins\n maven-surefire-plugin\n 3.1.0\n \n \n src/test/resources/testng.xml\n \n \n \n \n org.apache.maven.surefire\n surefire-testng\n 3.1.0\n \n \n \n\nAfter including the dependency on the plugin, now is working fine.\n" }, { "QuestionId": "76380600", "QuestionTitle": "Terragrunt - make dynamic group optional", "QuestionBody": "I'm using Okta provider to create okta_app_oauth and okta_app_group_assignments. My module looks like:\nresource \"okta_app_oauth\" \"app\" {\n\n label = var.label\n type = var.type\n grant_types = var.grant_types\n redirect_uris = var.type != \"service\" ? 
var.redirect_uris : null\n response_types = var.response_types\n login_mode = var.login_mode\n login_uri = var.login_uri\n post_logout_redirect_uris = var.post_logout_redirect_uris\n consent_method = var.consent_method\n token_endpoint_auth_method = var.token_endpoint_auth_method\n pkce_required = var.token_endpoint_auth_method == \"none\" ? true : var.pkce_required\n lifecycle {\n ignore_changes = [\n client_basic_secret, groups\n ]\n }\n}\n\nresource \"okta_app_group_assignments\" \"app\" {\n app_id = okta_app_oauth.app.id\n dynamic \"group\" {\n for_each = var.app_groups\n content {\n id = group.value[\"id\"]\n priority = group.value[\"priority\"]\n }\n }\n}\n\nAnd it works when I assign groups to application, but when I don't want to assign groups, I get error:\n│ Error: Invalid index\n│ \n│ on main.tf line 26, in resource \"okta_app_group_assignments\" \"app\":\n│ 26: id = group.value[\"id\"]\n│ ├────────────────\n│ │ group.value is empty map of dynamic\n│ \n│ The given key does not identify an element in this collection value.\n\n\nin addition, my app_groups variable looks like:\nvariable \"app_groups\" {\n description = \"Groups assigned to app\"\n type = list(map(any))\n default = [{}]\n}\n\nI was trying to use lookup(group, \"priority\", null), but it wasn't resolving my problem. 
Can somebody help me with solving this?\n", "AnswerId": "76380785", "AnswerBody": "You can make the block optional as follows:\n dynamic \"group\" {\n for_each = length(var.app_groups) > 0 ? var.app_groups : []\n content {\n id = group.value[\"id\"]\n priority = group.value[\"priority\"]\n }\n }\n\nAlso, your default value for app_groups should be:\nvariable \"app_groups\" {\n description = \"Groups assigned to app\"\n type = list(map(any))\n default = []\n}\n\n" }, { "QuestionId": "76378487", "QuestionTitle": "Group by and select rows based on if value combinations exist", "QuestionBody": "I have a table PetsTable:\n\n\n\n\nId\nType\nkey\nvalue\n\n\n\n\n1\n\"Cat\"\n10\n5\n\n\n1\n\"Cat\"\n9\n2\n\n\n2\n\"dog\"\n10\n5\n\n\n1\n\"Cat\"\n8\n4\n\n\n1\n\"Cat\"\n6\n3\n\n\n2\n\"dog\"\n8\n4\n\n\n2\n\"dog\"\n6\n3\n\n\n3\n\"Cat\"\n13\n5\n\n\n3\n\"Cat\"\n10\n0\n\n\n3\n\"Cat\"\n8\n0\n\n\n\n\nHow to insert this data into a new table MyPets from PetsTable with these conditions:\n\nGroup by Id\nOnly select rows when in the group exists (key = 10 and value = 5) and (key = 8 and value = 4) and (key = 6 and value = 3)\nIf exists key = 9, then mark hasFee = 1 else hasFee = 0\n\nFinal table should look like:\n\n\n\n\nId\nType\nhasFee\n\n\n\n\n1\n\"Cat\"\n1\n\n\n2\n\"dog\"\n0\n\n\n\n", "AnswerId": "76378586", "AnswerBody": "One approach is to use window functions to evaluate your conditions, which you can then apply as conditions using a CTE.\nThis creates the data you desire; it's then trivial to insert into a table of your choice.\ncreate table Test (Id int, [Type] varchar(3), [Key] int, [Value] int);\n\ninsert into Test (Id, [Type], [Key], [Value])\nvalues\n(1, 'Cat', 10, 5),\n(1, 'Cat', 9, 2),\n(2, 'Dog', 10, 5),\n(1, 'Cat', 8, 4),\n(1, 'Cat', 6, 3),\n(2, 'Dog', 8, 4),\n(2, 'Dog', 6, 3),\n(3, 'Cat', 13, 5),\n(3, 'Cat', 10, 0),\n(3, 'Cat', 8, 0);\n\nwith cte as (\n select *\n , sum(case when [Key] = 10 and [Value] = 5 then 1 else 0 end) over (partition by Id) Cond1\n , sum(case when 
[Key] = 8 and [Value] = 4 then 1 else 0 end) over (partition by Id) Cond2\n , sum(case when [Key] = 6 and [Value] = 3 then 1 else 0 end) over (partition by Id) Cond3\n , sum(case when [Key] = 9 then 1 else 0 end) over (partition by Id) HasFee\n from Test\n)\nselect Id, [Type], HasFee\nfrom cte\nwhere Cond1 = 1 and Cond2 = 1 and Cond3 = 1\ngroup by Id, [Type], HasFee;\n\nReturns:\n\n\n\n\nId\nType\nHasFee\n\n\n\n\n1\nCat\n1\n\n\n2\nDog\n0\n\n\n\n\nNote: If you provide your sample data in this format (DDL+DML) you make it much easier for people to assist.\ndb<>fiddle\n" }, { "QuestionId": "76380579", "QuestionTitle": "How to store multiple commands in a bash variable (similar to cat otherscript.sh)", "QuestionBody": "For work I'm needing to connect to test nodes and establish a vnc connection so you can see the desktop remotely. It's a manual process with a bunch of commands that need to be executed in order. Perfect for automation using a bash script. The problem is that some commands need to be executed on the remote node after an ssh connection is established.\nCurrently I've got it working like this, where startVNC is a seperate bash file which stores the commands that need to be executed on the remote node after an ssh connection is established.\ncat startVNC | sed -e \"s/\\$scaling/$scaling/\" -e \"s/\\$address/$address/\" -e \"s/\\$display/$display/\" | ssh -X maintain@$host\n\nFor my question the contents of startVNC don't really matter, just that multiple commands can be executed in order. It could be:\necho \"hello\"\nsleep 1\necho \"world\"\n\nWhile for personal use this solution is fine, I find it a bit of a bother that this needs to be done using two separate bash files. If I want to share this file (which I do) it'd be better if it was just one file. 
My question is, is it possible to mimic the output from cat in some way using a variable?\n", "AnswerId": "76380826", "AnswerBody": "Well, you could do:\na=\"echo 'hello'\\nsleep 2\\necho world\\n\"\necho -e $a\n# output-> echo 'hello'\n# output-> sleep 2\n# output-> echo world\necho -e $a | bash\n# output-> hello\n# waiting 2 secs\n# output-> world\n\nThe -e in echo enables the interpretation of the \\n.\n" }, { "QuestionId": "76383957", "QuestionTitle": "How to set ID header in Spring Integration Kafka Message?", "QuestionBody": "I have a demo Spring Integration project which is receiving Kafka messages, aggregating them, and then releasing them. I'm trying to add JdbcMessageStore to the project. The problem is that it failing with error:\nCaused by: java.lang.IllegalArgumentException: Cannot store messages without an ID header\n at org.springframework.util.Assert.notNull(Assert.java:201) ~[spring-core-5.2.15.RELEASE.jar:5.2.15.RELEASE]\n at org.springframework.integration.jdbc.store.JdbcMessageStore.addMessage(JdbcMessageStore.java:314) ~[spring-integration-jdbc-5.3.8.RELEASE.jar:5.3.8.RELEASE]\n\nAfter debugging I found that it requires the UUID header id in this message. But the problem is that I can't manually set the Kafka header id - it is forbidden (the same as timestamp header) - I tried to do this in Kafka producer in different project.\nIf I'm using IDEA plugin named Big Data Tools and send a message from there I'm able to set id header but it is received by my project as an array of bytes and it is failing with error\nIllegalArgumentException Incorrect type specified for header 'id'. Expected [UUID] but actual type is [B]\n\nI can't find any solution on how to resolve this issue. 
I need to set somehow this id header to be able to store messages in the database.\nThanks in advance\n", "AnswerId": "76384041", "AnswerBody": "The KafkaMessageDrivenChannelAdapter has an option:\n/**\n * Set the message converter to use with a record-based consumer.\n * @param messageConverter the converter.\n */\npublic void setRecordMessageConverter(RecordMessageConverter messageConverter) {\n\nWhere you can set a MessagingMessageConverter with:\n/**\n * Generate {@link Message} {@code ids} for produced messages. If set to {@code false},\n * will try to use a default value. By default set to {@code false}.\n * @param generateMessageId true if a message id should be generated\n */\npublic void setGenerateMessageId(boolean generateMessageId) {\n this.generateMessageId = generateMessageId;\n}\n\n/**\n * Generate {@code timestamp} for produced messages. If set to {@code false}, -1 is\n * used instead. By default set to {@code false}.\n * @param generateTimestamp true if a timestamp should be generated\n */\npublic void setGenerateTimestamp(boolean generateTimestamp) {\n this.generateTimestamp = generateTimestamp;\n}\n\nset to true.\nThis way the Message created from a ConsumerRecord will have respective id and timestamp headers.\nYou also simply can have a \"dummy\" transformer to return incoming payload and the framework will create a new Message where those headers are generated.\n" }, { "QuestionId": "76383902", "QuestionTitle": "Concatenate onto Next Row", "QuestionBody": "I have some SQL that does some manipulation to the data i.e. 
filling in empty columns.\nSELECT *,\n ModifiedLineData = CASE\n WHEN Column2 = '' AND LineData NOT LIKE ',,,0,,,,0'\n THEN CONCAT(STUFF(LineData, CHARINDEX(',', LineData, CHARINDEX(',', LineData) + 1), 0, '\"No PO Number\"'), ',\"\"')\n ELSE CONCAT(LineData, ',\"\"')\n END\nFROM (\n SELECT\n *,\n Column2 = CONVERT(XML, '' + REPLACE((SELECT ISNULL(LineData, '') FOR XML PATH('')), ',', '') + '').value('/s[2]', 'varchar(100)')\n FROM [dbo].[Temp_Raw_Data]\n WHERE LineData NOT LIKE ',,,0,,,,0'\n) AS Subquery\n\nNow lets say this returns\n\n\n\n\nFileName\nLineNumber\nLineData\nColumn2\nModifiedLineData\n\n\n\n\nfile1\n4\n1232,,\"product-1\", 1,0\n\n1232,NA,\"product-1\", 1,0\n\n\nfile2\n7\n\"failed\"\nNULL\n\"failed\"\n\n\nfile3\n8\n1235,,\"product-2\", 1,0\n\n1235,NA,\"product-2\", 1,0\n\n\n\n\nHow can I modify this query so that if Column2 is NULL then it would concatenate the LineData onto the next row (ModifiedLineData) else just concatenate a ,\"\" and then remove that NULL result (if possible else it doesnt matter) so that my result would look like:\n\n\n\n\nFileName\nLineNumber\nLineData\nColumn2\nModifiedLineData\n\n\n\n\nfile1\n4\n1232,,\"product-1\", 1,0\n\n1232,NA,\"product-1\", 1,0,\"\"\n\n\nfile3\n8\n1235,,\"product-2\", 1,0\n\n1235,NA,\"product-2\", 1,0,\"failed\"\n\n\n\n\nI tried playing around with LEAD() but couldn't get it how i wanted.\nNote: Two null rows are not possible to be together. This is due to the nature of the data. 
The next row should simply be the next available row when selecting all rows as they are imported one by 1.\nUpdated Query that isn't concatenating:\nSELECT * \n FROM (SELECT FileName, LineNumber, LineData, Column2, \n CASE WHEN LAG(Column2) OVER(ORDER BY LineNumber) IS NULL\n THEN CONCAT_WS(', ',\n ModifiedLineData, \n LAG(ModifiedLineData) OVER(ORDER BY LineNumber))\n ELSE ModifiedLineData\n END AS ModifiedLineData\n FROM (\n SELECT *,\n ModifiedLineData = CASE\n WHEN Column2 = '' AND LineData NOT LIKE ',,,0,,,,0'\n THEN CONCAT(STUFF(LineData, CHARINDEX(',', LineData, CHARINDEX(',', LineData) + 1), 0, '\"No PO Number\"'), '')\n ELSE CONCAT(LineData, '')\n END\n FROM (\n SELECT *,\n Column2 = CONVERT(XML, '' + REPLACE((SELECT ISNULL(LineData, '') FOR XML PATH('')), ',', '') + '').value('/s[2]', 'varchar(100)')\n FROM [backstreet_WMS_Optimizer].[dbo].[Temp_GoodsIn_Raw_Data]\n WHERE LineData NOT LIKE ',,,0,,,,0'\n ) AS Subquery\n ) AS cte\n) AS Subquery\nWHERE Column2 IS NOT NULL\norder by FileName, LineNumber\n\n", "AnswerId": "76384109", "AnswerBody": "Given that you can't have consecutive NULL values, using LEAD/LAG should be suitable for this task. 
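To see the LAG idea in isolation, here is a self-contained sketch; it uses SQLite's window functions driven from Python rather than SQL Server, and the table and column names are illustrative only, mirroring the shape of the data above:

```python
import sqlite3

# In-memory toy table with the same shape as the question's data (illustrative names).
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t (LineNumber INTEGER, Column2 TEXT, ModifiedLineData TEXT);
INSERT INTO t VALUES
  (4, '',   '1232,NA,"product-1", 1,0'),
  (7, NULL, '"failed"'),
  (8, '',   '1235,NA,"product-2", 1,0');
""")

# Pull the previous row's data onto the row that follows a NULL-Column2 row,
# then drop the NULL rows themselves in the outer query.
rows = con.execute("""
SELECT LineNumber, ModifiedLineData
FROM (
  SELECT LineNumber, Column2,
         CASE WHEN LAG(Column2) OVER (ORDER BY LineNumber) IS NULL
                   AND LAG(ModifiedLineData) OVER (ORDER BY LineNumber) IS NOT NULL
              THEN ModifiedLineData || ',' || LAG(ModifiedLineData) OVER (ORDER BY LineNumber)
              ELSE ModifiedLineData
         END AS ModifiedLineData
  FROM t
) AS cte
WHERE Column2 IS NOT NULL
""").fetchall()

print(rows)
# [(4, '1232,NA,"product-1", 1,0'), (8, '1235,NA,"product-2", 1,0,"failed"')]
```

The extra `IS NOT NULL` guard keeps the very first row (where LAG has no previous row and also returns NULL) from being concatenated with nothing.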
Without knowledge of your original data, we can work on your query and add two subqueries on top, the last of which is optional:\n\nthe inner adds the information needed to the record following each \"Column2=NULL\" record\nthe outer removes records having those null values\n\nSELECT * \n FROM (SELECT FileName, LineNumber, LineData, Column2, \n CASE WHEN LAG(Column2) OVER(ORDER BY LineNumber) IS NULL\n THEN CONCAT_WS(', ',\n ModifiedLineData, \n LAG(ModifiedLineData) OVER(ORDER BY LineNumber))\n ELSE ModifiedLineData\n END AS ModifiedLineData\n FROM <your query>) cte\nWHERE Column2 IS NOT NULL \n\nOutput:\n\n\n\n\nFileName\nLineNumber\nLineData\nColumn2\nModifiedLineData\n\n\n\n\nfile1\n4\n1232,,\"product-1\", 1,0\n\n1232,NA,\"product-1\", 1,0\n\n\nfile3\n8\n1235,,\"product-2\", 1,0\n\n1235,NA,\"product-2\", 1,0\"failed\"\n\n\n\n\nCheck the demo here.\n" }, { "QuestionId": "76378480", "QuestionTitle": "How do I get my main content to take up the rest of the space left over after the header and footer?", "QuestionBody": "I'm working through The Odin Project and I'm having trouble making my main content take up the rest of the space of the browser.\nRight now it looks like this:\n\nThe 1px solid red border is as far as the main content goes. I have tried this but it's not allowing for a fixed header and footer. I have also tried some other flex solutions. Those are commented out in the code.\nAm I just doing this whole thing wrong? Is there a standard way that I don't know about?\nindex.html:\n\n
\n

\n MY AWESOME WEBSITE\n

\n
\n\n
\n \n
\n
Lorem ipsum dolor sit amet consectetur adipisicing elit. Tempora, eveniet? Dolorem\n dignissimos\n maiores non delectus possimus dolor nulla repudiandae vitae provident quae, obcaecati ipsam unde impedit\n corrupti veritatis minima porro?
\n
Lorem ipsum dolor sit amet consectetur adipisicing elit. Quasi quaerat qui iure ipsam\n maiores\n velit tempora, deleniti nesciunt fuga suscipit alias vero rem, corporis officia totam saepe excepturi\n odit\n ea.\n
\n
Lorem ipsum dolor sit amet consectetur, adipisicing elit. Nobis illo ex quas, commodi\n eligendi\n aliquam ut, dolor, atque aliquid iure nulla. Laudantium optio accusantium quaerat fugiat, natus officia\n esse\n autem?
\n
Lorem ipsum dolor sit amet consectetur adipisicing elit. Necessitatibus nihil impedit eius\n amet\n adipisci dolorum vel nostrum sit excepturi corporis tenetur cum, dolore incidunt blanditiis. Unde earum\n minima\n laboriosam eos!
\n
Lorem ipsum dolor sit amet consectetur, adipisicing elit. Nobis illo ex quas, commodi\n eligendi\n aliquam ut, dolor, atque aliquid iure nulla. Laudantium optio accusantium quaerat fugiat, natus officia\n esse\n autem?
\n
Lorem ipsum dolor sit amet consectetur adipisicing elit. Necessitatibus nihil impedit eius\n amet\n adipisci dolorum vel nostrum sit excepturi corporis tenetur cum, dolore incidunt blanditiis. Unde earum\n minima\n laboriosam eos!
\n
\n
\n\n
\n The Odin Project ❤️\n
\n\n\n\n\nstyle-07.css:\n:root{\n --header-height: 72px;\n}\nbody {\n font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, Oxygen, Ubuntu, Cantarell, 'Open Sans', 'Helvetica Neue', sans-serif;\n margin: 0;\n min-height: 100vh;\n height: 100%;\n}\n\n.main-content{\n display: flex;\n height: 100%; /* If I use px units it will force the main content to go down but I know that is not ideal. */\n padding-top: var(--header-height); \n flex-direction: row;\n border: 1px solid red;\n /* Things I have tried from other answers*/\n /* flex: 1 1 auto; */\n /* height: calc(100% - var(--header-height)); */\n}\n\n.sidebar{\n flex-shrink: 0;\n}\n\n.content {\n padding: 32px;\n display: flex;\n flex-wrap: wrap;\n}\n\n.card {\n width: 300px;\n padding: 16px;\n margin: 16px;\n}\n\n.header {\n position: fixed;\n top: 0;\n left: 0;\n right: 0;\n display: flex;\n align-items: center;\n height: var(--header-height);\n background: darkmagenta;\n color: white;\n padding: 0px 15px;\n}\n\nh1 {\n font-weight: 1000;\n}\n\n.footer {\n height: var(--header-height);\n background: #eee;\n color: darkmagenta;\n position: fixed;\n bottom: 0;\n left: 0;\n right: 0;\n width: 100%;\n height: 5%;\n display: flex;\n justify-content: center;\n align-items: center;\n}\n\n.sidebar {\n width: 300px;\n background: royalblue;\n box-sizing: border-box;\n padding: 16px;\n}\n\n.card {\n border: 1px solid #eee;\n box-shadow: 2px 4px 16px rgba(0, 0, 0, .06);\n border-radius: 4px;\n}\n\nul{\n list-style-type: none;\n margin: 0;\n padding: 0;\n}\n\na {\n text-decoration: none;\n color: white;\n font-size: 24px;\n}\n\nli{\n margin-bottom: 16px;\n}\n\n", "AnswerId": "76378588", "AnswerBody": "You can use flex display on the body instead of position: fixed on the header and footer: make the body a flex container with column direction, then for main-content all you need is to set flex: 1 and remove the top padding; flex: 1 will make sure that main-content takes any remaining space in the parent. 
Set the body height to 100vh with overflow: hidden; for main-content, set overflow: auto.\nAdditionally, to make the sidebar sticky when scrolling, I added position: relative; to main-content and position: sticky; to the sidebar.\nTo force the header and footer heights and prevent them from being squeezed by the flex layout, use min-height instead of height, as I modified in the code.\nTry viewing the snippet in full page; if you have any further questions, comment below.\n\n\n:root {\n --header-height: 72px;\n}\n\nbody {\n font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, Oxygen, Ubuntu, Cantarell, 'Open Sans', 'Helvetica Neue', sans-serif;\n margin: 0;\n height: 100vh;\n overflow:hidden;\n \n display: flex; \n flex-direction: column;\n}\n\n.main-content {\n flex: 1;\n display: flex;\n overflow-y: auto;\n /* If I use px units it will force the main content to go down but I know that is not ideal. */\n flex-direction: row;\n border: 1px solid red;\n /* Things I have tried from other answers*/\n /* flex: 1 1 auto; */\n /* height: calc(100% - var(--header-height)); */\n \n position: relative;\n}\n\n\n\n.content {\n padding: 32px;\n display: flex;\n flex-wrap: wrap;\n}\n\n.card {\n width: 300px;\n padding: 16px;\n margin: 16px;\n}\n\n.header {\n\n \n display: flex;\n align-items: center;\n min-height: var(--header-height);\n background: darkmagenta;\n color: white;\n padding: 0px 15px;\n}\n\nh1 {\n font-weight: 1000;\n}\n\n.footer {\n min-height: var(--header-height);\n background: #eee;\n color: darkmagenta;\n \n width: 100%;\n height: 5%;\n display: flex;\n justify-content: center;\n align-items: center;\n}\n\n.sidebar {\n width: 300px;\n background: royalblue;\n box-sizing: border-box;\n padding: 16px;\n \n position: sticky;\n top: 0;\n \n white-space: nowrap;\n min-height: 250px;\n}\n\n.card {\n border: 1px solid #eee;\n box-shadow: 2px 4px 16px rgba(0, 0, 0, .06);\n border-radius: 4px;\n}\n\nul {\n list-style-type: none;\n margin: 0;\n padding: 
0;\n}\n\na {\n text-decoration: none;\n color: white;\n font-size: 24px;\n}\n\nli {\n margin-bottom: 16px;\n}\n\n
\n

\n MY AWESOME WEBSITE\n

\n
\n\n
\n \n
\n
Lorem ipsum dolor sit amet consectetur adipisicing elit. Tempora, eveniet? Dolorem dignissimos maiores non delectus possimus dolor nulla repudiandae vitae provident quae, obcaecati ipsam unde impedit corrupti veritatis minima porro?
\n
Lorem ipsum dolor sit amet consectetur adipisicing elit. Quasi quaerat qui iure ipsam maiores velit tempora, deleniti nesciunt fuga suscipit alias vero rem, corporis officia totam saepe excepturi odit ea.\n
\n
Lorem ipsum dolor sit amet consectetur, adipisicing elit. Nobis illo ex quas, commodi eligendi aliquam ut, dolor, atque aliquid iure nulla. Laudantium optio accusantium quaerat fugiat, natus officia esse autem?\n
\n
Lorem ipsum dolor sit amet consectetur adipisicing elit. Necessitatibus nihil impedit eius amet adipisci dolorum vel nostrum sit excepturi corporis tenetur cum, dolore incidunt blanditiis. Unde earum minima laboriosam eos!
\n
Lorem ipsum dolor sit amet consectetur, adipisicing elit. Nobis illo ex quas, commodi eligendi aliquam ut, dolor, atque aliquid iure nulla. Laudantium optio accusantium quaerat fugiat, natus officia esse autem?\n
\n
Lorem ipsum dolor sit amet consectetur adipisicing elit. Necessitatibus nihil impedit eius amet adipisci dolorum vel nostrum sit excepturi corporis tenetur cum, dolore incidunt blanditiis. Unde earum minima laboriosam eos!
\n
\n
\n\n
\n The Odin Project ❤️\n
\n\n\n\n\n\n\n" }, { "QuestionId": "76384080", "QuestionTitle": "Can't initialize variables inside of when statement in Kotlin", "QuestionBody": "For whatever reason, my Kotlin program won't initialize variables assigned inside a when statement. Here's the code:\nimport kotlin.random.Random\nimport kotlin.random.nextInt\n\nval mood: String\n\nwhen(Random.nextInt(1..2)) {\n 1 -> {\n mood = \"loud\"\n println(\"$mood\")\n }\n 2 -> {\n mood = \"quiet\"\n println(\"$mood\")\n }\n}\n\nprintln(\"$mood\")\n\nThe lines inside the when statement are printed, but when I run the last line, I get a \"Variable 'mood' must be initialized\" error.\nI don't know what I could possibly be doing wrong here...\n", "AnswerId": "76384128", "AnswerBody": "In Kotlin, variables declared with the val keyword must be initialized at the point of declaration or in the constructor of the class. In your code, the mood variable is declared without an initial value, and you are trying to assign values to it inside the when statement. However, the compiler is unable to determine if either of the branches will be executed at runtime, so it doesn't consider the variable as fully initialized.\nTo fix this issue, you can either declare the mood variable as a var instead of a val or assign an initial value to it when declaring it. Here's an updated version of your code using a var:\nimport kotlin.random.Random\nimport kotlin.random.nextInt\nvar mood: String\n\nwhen (Random.nextInt(1..2)) {\n 1 -> {\n mood = \"loud\"\n println(\"$mood\")\n }\n 2 -> {\n mood = \"quiet\"\n println(\"$mood\")\n }\n}\n\nprintln(\"$mood\")\n\nBy using a var instead of a val, you indicate that the variable can be reassigned later. Since the mood variable is assigned within both branches of the when statement, the compiler no longer complains about it being uninitialized.\nNote that the order of the when branches should cover all possible cases, otherwise you might encounter a \"when expression must be exhaustive\" warning. 
In your case, the range of nextInt is 1 to 2, so the two branches should be sufficient.\n" }, { "QuestionId": "76380728", "QuestionTitle": "Flutter Deep Link Firebase in iOS", "QuestionBody": "My deep link works fine on Android and transfers information to the app, but it doesn't work on iOS\nFirebase Link\nhttps://dvzpl.com\n\nmy short link\nhttps://dvzpl.com/6BG2\n\nmy domain\nhttps://dovizpanel.com/\n\nmy associated domain\n\n aps-environment\n development\n com.apple.developer.associated-domains\n \n webcredentials:dvzpl.com\n applinks:dvzpl.com\n \n\n\nhow to fix ?\nWhen I open the short link in the browser, it goes into the app but does not transfer the data in ios , android working not problams\nFirebaseDynamicLinksCustomDomains\n\n https://dovizpanel.com/blog\n https://dovizpanel.com/exchanger\n https://dovizpanel.com/link\n\n\n", "AnswerId": "76380908", "AnswerBody": "If you are using a custom domain for firebase dynamic links follow the instructions below:\nIn your Xcode project's Info.plist file, create a key called FirebaseDynamicLinksCustomDomains and set it to your app's Dynamic Links URL prefixes. For example:\nFirebaseDynamicLinksCustomDomains\n\n https://dvzpl.com\n\n\nYou can find more details directly in the Firebase documentation.\n" }, { "QuestionId": "76384218", "QuestionTitle": "How to toggle/display content individually in ReactJS", "QuestionBody": "my question is how can I toggle/display the \"Some text\" content on onClick individually?.\nI can use different function and state for every div an it is working but I know this is not the correct way to do it .\nCan you help me with this guys? Thanks\nThis is my code\nfunction App() {\n const [loaded, setLoaded] = useState(true);\n const [show, setShow] = useState(false);\n\n const handleShow = () => {\n setShow(!show);\n };\n\n return (\n
\n {loaded && (\n
\n
\n

Title

\n {show && (\n
\n

Some text

\n
\n )}\n
\n
\n

Title

\n {show && (\n
\n

Some text

\n
\n )}\n
\n
\n

Title

\n {show && (\n
\n

Some text

\n
\n )}\n
\n
\n )}\n
\n );\n}\n\n", "AnswerId": "76384269", "AnswerBody": "You could create a custom component for your card that handles the state for each card:\nfunction Card() {\n const [show, setShow] = useState(false);\n\n const handleShow = () => {\n setShow(state => !state);\n };\n\n return
\n

Title

\n\n {show && (\n
\n

Some text

\n
\n )}\n
\n}\n\nAnd use it in your app:\nfunction App() {\n const [loaded, setLoaded] = useState(true);\n\n return (\n
\n {loaded && (\n
\n \n \n \n
\n )}\n
\n );\n}\n\n" }, { "QuestionId": "76383839", "QuestionTitle": "Revoking permission to install plugins?", "QuestionBody": "The following query was used as part of a security audit to identify users with access to install/uninstall server plugins at the database level.\nSELECT user, host FROM mysql.db WHERE db = 'mysql' and (insert_priv='y') or (delete_priv='y') or (insert_priv='y' and delete_priv='y');\n\nI need to revoke that permission from the users that are listed. Is there a specific privilege I revoke to do this? If so, I can't find it. Or would I simply UPDATE the insert_priv and delete_priv fields directly in the mysql.db table? I'm not a DBA but the closest thing we have at the moment.\n", "AnswerId": "76384284", "AnswerBody": "You are able to install plugins when you have INSERT permissions on the mysql.plugin table, see INSTALL PLUGIN:\n\nTo use INSTALL PLUGIN, you must have the INSERT privilege for the mysql.plugin table.\n\nSo when you have database wide INSERT permissions on the (internal administrative) database mysql, then you can install plugins.\nThe same goes for the UNINSTALL PLUGIN statement, see UNINSTALL PLUGIN\n\nTo use UNINSTALL PLUGIN, you must have the DELETE privilege for the mysql.plugin table.\n\nRemove the insert_priv and delete_priv privileges for the mysql database, your \"normal\" MySQL user accounts shouldn't be able to write in this database anyway.\n" }, { "QuestionId": "76378670", "QuestionTitle": "pandas dataframe query not working with where", "QuestionBody": "I am new to pandas, I have this data frame:\ndf['educ1']\nwhich gives\n1 4\n2 3\n3 3\n4 4\n5 1\n ..\n28461 3\n28462 2\n28463 3\n28464 2\n28465 4\nName: educ1, Length: 28465, dtype: int64\n\nwhen I try querying with\ndt=df[df.educ1 > 1]\n\nIt's working fine returning multiple rows, but when I try\ncollege_grad_mask=(df.educ1 > 1)\ndf.where(college_grad_mask).dropna().head()\n\nIt gives 0 rows, I wonder what is wrong here?\n", "AnswerId": "76378715", "AnswerBody": "You 
likely have NaNs in many columns, try to subset:\ndf.where(college_grad_mask).dropna(subset=['educ1']).head()\n\nOr better:\ndf[college_grad_mask].head()\n\n" }, { "QuestionId": "76378383", "QuestionTitle": "Problem when scoring new data -- tidymodels", "QuestionBody": "I'm learning tidymodels. The following code runs nicely:\nlibrary(tidyverse)\nlibrary(tidymodels)\n\n# Draw a random sample of 2000 to try the models\n\nset.seed(1234)\n\ndiamonds <- diamonds %>% \n sample_n(2000)\n \ndiamonds_split <- initial_split(diamonds, prop = 0.80, strata=\"price\")\n\ndiamonds_train <- training(diamonds_split)\ndiamonds_test <- testing(diamonds_split)\n\nfolds <- rsample::vfold_cv(diamonds_train, v = 10, strata=\"price\")\n\nmetric <- metric_set(rmse,rsq,mae)\n\n# Model KNN \n\nknn_spec <-\n nearest_neighbor(\n mode = \"regression\", \n neighbors = tune(\"k\"),\n engine = \"kknn\"\n ) \n\nknn_rec <-\n recipe(price ~ ., data = diamonds_train) %>%\n step_log(all_outcomes()) %>% \n step_normalize(all_numeric_predictors()) %>% \n step_dummy(all_nominal_predictors())\n\nknn_wflow <- \n workflow() %>% \n add_model(knn_spec) %>%\n add_recipe(knn_rec)\n\nknn_grid = expand.grid(k=c(1,5,10,30))\n\nknn_res <- \n tune_grid(\n knn_wflow,\n resamples = folds,\n metrics = metric,\n grid = knn_grid\n )\n\ncollect_metrics(knn_res)\nautoplot(knn_res)\n\nshow_best(knn_res,metric=\"rmse\")\n\n# Best KNN \n\nbest_knn_spec <-\n nearest_neighbor(\n mode = \"regression\", \n neighbors = 10,\n engine = \"kknn\"\n ) \n\nbest_knn_wflow <- \n workflow() %>% \n add_model(best_knn_spec) %>%\n add_recipe(knn_rec)\n\nbest_knn_fit <- last_fit(best_knn_wflow, diamonds_split)\n\ncollect_metrics(best_knn_fit)\n\n\nBut when I try to fit the best model on the training set and applying it to the test set I run into problems. The following two lines give me the error : \"Error in step_log():\n! 
The following required column is missing from new_data in step 'log_mUSAb': price.\nRun rlang::last_trace() to see where the error occurred.\"\n# Predict Manually\n\nf1 = fit(best_knn_wflow,diamonds_train)\np1 = predict(f1,new_data=diamonds_test)\n\n", "AnswerId": "76378734", "AnswerBody": "This problem is related to log transform outcome variable in tidymodels workflow\nFor log transformations to the outcome, we strongly recommend that those transformation be done before you pass them to the recipe(). This is because you are not guaranteed to have an outcome when predicting (which is what happens when you last_fit() a workflow) on new data. And the recipe fails.\nYou are seeing this here as when you predict on a workflow() object, it only passes the predictors, as it is all that it needs. Hence why you see this error.\nSince log transformations isn't a learned transformation you can safely do it before.\ndiamonds_train$price <- log(diamonds_train$price)\n\nif (!is.null(diamonds_test$price)) {\n diamonds_test$price <- log(diamonds_test$price)\n}\n\n" }, { "QuestionId": "76380693", "QuestionTitle": "How to name a term created in the formula when calling `lm()`?", "QuestionBody": "Is it possible to name a term created in a formula? This is the scenario:\nCreate a toy dataset:\nset.seed(67253)\nn <- 100\nx <- sample(c(\"A\", \"B\", \"C\"), size = n, replace = TRUE)\ny <- sapply(x, switch, A = 0, B = 2, C = 1) + rnorm(n, 2)\ndat <- data.frame(x, y)\nhead(dat)\n#> x y\n#> 1 B 4.5014474\n#> 2 C 4.0252796\n#> 3 C 2.4958761\n#> 4 C 0.6725571\n#> 5 B 4.3364206\n#> 6 C 3.9798909\n\nFit a regression model:\nout <- lm(y ~ x, dat)\nsummary(out)\n#> \n#> Call:\n#> lm(formula = y ~ x, data = dat)\n#> \n#> Residuals:\n#> Min 1Q Median 3Q Max \n#> -2.07296 -0.52161 -0.03713 0.53898 2.12497 \n#> \n#> Coefficients:\n#> Estimate Std. 
Error t value Pr(>|t|) \n#> (Intercept) 2.1138 0.1726 12.244 < 2e-16 ***\n#> xB 1.6772 0.2306 7.274 9.04e-11 ***\n#> xC 0.5413 0.2350 2.303 0.0234 * \n#> ---\n#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1\n#> \n#> Residual standard error: 0.9297 on 97 degrees of freedom\n#> Multiple R-squared: 0.3703, Adjusted R-squared: 0.3573 \n#> F-statistic: 28.52 on 2 and 97 DF, p-value: 1.808e-10\n\nFit the model again, but use \"C\" as the reference group:\nout2 <- lm(y ~ relevel(factor(x), ref = \"C\"), dat)\nsummary(out2)\n#> \n#> Call:\n#> lm(formula = y ~ relevel(factor(x), ref = \"C\"), data = dat)\n#> \n#> Residuals:\n#> Min 1Q Median 3Q Max \n#> -2.07296 -0.52161 -0.03713 0.53898 2.12497 \n#> \n#> Coefficients:\n#> Estimate Std. Error t value Pr(>|t|) \n#> (Intercept) 2.6551 0.1594 16.653 < 2e-16 ***\n#> relevel(factor(x), ref = \"C\")A -0.5413 0.2350 -2.303 0.0234 * \n#> relevel(factor(x), ref = \"C\")B 1.1359 0.2209 5.143 1.41e-06 ***\n#> ---\n#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1\n#> \n#> Residual standard error: 0.9297 on 97 degrees of freedom\n#> Multiple R-squared: 0.3703, Adjusted R-squared: 0.3573 \n#> F-statistic: 28.52 on 2 and 97 DF, p-value: 1.808e-10\n\nThe variable, x, was re-leveled in the second call to lm(). This is done in the formula and so the name of this term is relevel(factor(x), ref = \"C\").\nCertainly, we can create the term before calling lm(), e.g.:\ndat$x2 <- relevel(factor(x), ref = \"C\")\nout3 <- lm(y ~ x2, dat)\nsummary(out3)\n#> \n#> Call:\n#> lm(formula = y ~ x2, data = dat)\n#> \n#> Residuals:\n#> Min 1Q Median 3Q Max \n#> -2.07296 -0.52161 -0.03713 0.53898 2.12497 \n#> \n#> Coefficients:\n#> Estimate Std. Error t value Pr(>|t|) \n#> (Intercept) 2.6551 0.1594 16.653 < 2e-16 ***\n#> x2A -0.5413 0.2350 -2.303 0.0234 * \n#> x2B 1.1359 0.2209 5.143 1.41e-06 ***\n#> ---\n#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 
0.1 ' ' 1\n#> \n#> Residual standard error: 0.9297 on 97 degrees of freedom\n#> Multiple R-squared: 0.3703, Adjusted R-squared: 0.3573 \n#> F-statistic: 28.52 on 2 and 97 DF, p-value: 1.808e-10\n\nHowever, can I create a term and name it in the formula? If yes, how?\n", "AnswerId": "76380922", "AnswerBody": "adapted from the info in this comment : Rename model terms in lm object for forecasting\nset.seed(67253)\nn <- 100\nx <- sample(c(\"A\", \"B\", \"C\"), size = n, replace = TRUE)\ny <- sapply(x, switch, A = 0, B = 2, C = 1) + rnorm(n, 2)\ndat <- data.frame(x, y)\n\nout <- lm(y ~ x, dat)\nsummary(out)\n\nout2 <- lm(y ~ x2, transform(dat,\n x2=relevel(factor(x), ref = \"C\")))\nsummary(out2)\n\n" }, { "QuestionId": "76378708", "QuestionTitle": "Translating Stata to R yields different results", "QuestionBody": "I am trying to translate a Stata code from a paper into R.\nThe Stata code looks like this:\ng tau = year - temp2 if temp2 > temp3 & (bod<. | do<. | lnfcoli<.)\n\nMy R translation looks like this:\ndata <- data %>%\n mutate(tau = if_else((temp2 > temp3) & \n (is.na(bod) | is.na(do) | is.na(lnfcoli)), \n year - temp2,\n NA_integer_))\n\nThe problem is that when I run each code I get different results.\nThis is the result I get when I run the code in Stata:\n1 Year | temp2 | temp3 | bod | do | lnfcoli | tau |\n2 1986 | 1995 | 1986 | 3.2 | 7.2 | 2.1. | -9 |\n\nThis is the result I get when I run the code in R:\n1 Year | temp2 | temp3 | bod | do | lnfcoli | tau |\n2 1986 | 1995 | 1986 | 3.2 | 7.2 | 2.1. | NA |\n\nDo you know what might be wrong with my R code or what should I modify to get the same output?\n", "AnswerId": "76378750", "AnswerBody": "None of bod, do or lnfcoli are missing (NA), so your logic returns FALSE and returns NA_integer_ (false= in the if_else). Stata treats . 
or missing values as positive infinity, so that check is actually looking for not missing.\nSo the equivalent in R/dplyr is probably:\ndata %>%\n mutate(\n tau = if_else(\n (temp2 > temp3) & (!(is.na(bod) | is.na(do) | is.na(lnfcoli))),\n year-temp2,\n NA_integer_\n )\n )\n\n# year temp2 temp3 bod do lnfcoli tau\n#1 1986 1995 1986 3.2 7.2 2.1 -9\n\n" }, { "QuestionId": "76383859", "QuestionTitle": "Why sometimes local class cannot access constexpr variables defined in function scope", "QuestionBody": "This c++ code cannot compile:\n#include \n\nint main()\n{\n constexpr int kInt = 123;\n struct LocalClass {\n void func(){\n const int b = std::max(kInt, 12); \n // ^~~~ \n // error: use of local variable with automatic storage from containing function\n std::cout << b;\n }\n };\n LocalClass a;\n a.func();\n return 0;\n}\n\nBut this works:\n#include \n#include \n\nint main()\n{\n constexpr int kInt = 123;\n struct LocalClass {\n void func(){\n const int b = std::max((int)kInt, 12); // added an extra conversion \"(int)\"\n std::cout << b;\n const int c = kInt; // this is also ok\n std::cout << c;\n const auto d = std::vector{kInt}; // also works\n std::cout << d[0];\n }\n };\n LocalClass a;\n a.func();\n return 0;\n}\n\nTested under C++17 and C++20, same behaviour.\n", "AnswerId": "76384297", "AnswerBody": "1. 
odr-using local entities from nested function scopes\nNote that kInt still has automatic storage duration - so it is a local entity as per:\n\n6.1 Preamble [basic.pre]\n(7) A local entity is a variable with automatic storage duration, [...]\n\n\nIn general local entities cannot be odr-used from nested function definitions (as in your LocalClass example)\nThis is given by:\n\n6.3 One-definition rule [basic.def.odr]\n(10) A local entity is odr-usable in a scope if:\n[...]\n(10.2) for each intervening scope between the point at which the entity is introduced and the scope (where *this is considered to be introduced within the innermost enclosing class or non-lambda function definition scope), either:\n\nthe intervening scope is a block scope, or\nthe intervening scope is the function parameter scope of a lambda-expression that has a simple-capture naming the entity or has a capture-default, and the block scope of the lambda-expression is also an intervening scope.\n\nIf a local entity is odr-used in a scope in which it is not odr-usable, the program is ill-formed.\n\nSo the only times you can odr-use a local variable within a nested scope are nested block scopes and lambdas which capture the local variable.\ni.e.:\nvoid foobar() {\n int x = 0;\n\n {\n // OK: x is odr-usable here because there is only an intervening block scope\n std::cout << x << std::endl;\n }\n\n // OK: x is odr-usable here because it is captured by the lambda\n auto l = [&]() { std::cout << x << std::endl; };\n\n // NOT OK: There is an intervening function definition scope\n struct K {\n int bar() { return x; }\n };\n}\n\n11.6 Local class declarations [class.local] contains a few examples of what is and is not allowed, if you're interested.\n\nSo if use of kInt constitutes an odr-use, your program is automatically ill-formed.\n2. 
Is naming kInt always an odr-use?\nIn general naming a variable constitutes an odr-use of that variable:\n\n6.3 One-definition rule [basic.def.odr]\n(5) A variable is named by an expression if the expression is an id-expression that denotes it. A variable x that is named by a potentially-evaluated expression E is odr-used by E unless [...]\n\nBut because kInt is a constant expression the special exception (5.2) could apply:\n\n6.3 One-definition rule [basic.def.odr]\n(5.2) x is a variable of non-reference type that is usable in constant expressions and has no mutable subobjects, and E is an element of the set of potential results of an expression of non-volatile-qualified non-class type to which the lvalue-to-rvalue conversion is applied, or\n\nSo naming kInt is not deemed an odr-use as long as it ...\n\nis of non-reference type (✓)\nis usable in constant expressions (✓)\ndoes not contain mutable members (✓)\n\nand the expression that contains kInt ...\n\nmust produce a non-volatile-qualified non-class type (✓)\nmust apply the lvalue-to-rvalue conversion (?)\n\nSo we pass almost all the checks for the naming of kInt to not be an odr-use, and therefore be well-formed.\nThe only condition that is not always true in your example is the lvalue-to-rvalue conversion that must happen.\nIf the lvalue-to-rvalue conversion does not happen (i.e. 
no temporary is introduced), then your program is ill-formed - if it does happen then it is well-formed.\n// lvalue-to-rvalue conversion will be applied to kInt:\n// (well-formed)\nconst int c = kInt; \nstd::vector v{kInt}; // vector constructor takes a std::size_t\n\n// lvalue-to-rvalue conversion will NOT be applied to kInt:\n// (it is passed by reference to std::max)\n// (ill-formed)\nstd::max(kInt, 12); // std::max takes arguments by const reference (!)\n\nThis is also the reason why std::max((int)kInt, 12); is well-formed - the explicit cast introduces a temporary variable due to the lvalue-to-rvalue conversion being applied.\n" }, { "QuestionId": "76380850", "QuestionTitle": "How do I keep and append placeholder text into the selected value in React Select?", "QuestionBody": "Let's say I have a React Select with a placeholder ('Selected Value: '), and I want to keep the placeholder and append it into the selected value so that it looks something like ('Selected Value: 1'). Is there any way to do it?\nimport Select from \"react-select\";\n\nexport default function App() {\n const options = [\n { value: 1, label: 1 },\n { value: 2, label: 2 },\n { value: 3, label: 3 },\n { value: 4, label: 4 }\n ];\n const placeholder = \"Selected Value: \";\n return (\n
\n setSelectBoxValue(event.value)} />\n
\n );\n}\n\n" }, { "QuestionId": "76380934", "QuestionTitle": "Method not allowed, flask, python", "QuestionBody": "Installed FlareSolverr in docker.\ncURL work correctly and return the correct response.\ncurl -L -X POST 'http://localhost:8191/v1' -H 'Content-Type: application/json' --data-raw '{\n \"cmd\": \"request.get\",\n \"url\":\"http://google.com\",\n \"maxTimeout\": 60000\n}'\n\nbut when using from python + flask I get an error - 405 Method is not allowed\ndef get_parsed_page(url, delay=0.5):\ndata = {\n \"cmd\": \"request.get\",\n \"url\": url,\n \"maxTimeout\": 60000\n}\nheaders = {\"Content-Type\": \"application/json\"}\ntime.sleep(delay)\nprint(requests.get(\"***:8191/v1\", headers=headers, data=data))\nreturn BeautifulSoup(requests.get(\"***:8191/v1\", headers=headers, data=data).text, 'lxml')\n\n", "AnswerId": "76380982", "AnswerBody": "you are using a GET request in your python code. It should be a POST request. Use requests.post\n" }, { "QuestionId": "76378592", "QuestionTitle": "Can't not write three value in 1 line", "QuestionBody": "I was having this problem about last week in this code\na = int(input())\nb = int(input())\nc = int(input())\nprint(min(a+b,b+c,c+a))\n\n\nso when I enter three input like this: 2 5 6 (three interger in 1 line)\nIt show me a error:\n\nFile \"c:\\Users\\Administrator\\Documents\\Code\\Python\\baitap(LQDOJ)\\EZMIN.py\", line 1, in \n a = int(input())\nValueError: invalid literal for int() with base 10: '2 5 6' \n\nand I see that it only identify 'a' but not identify 'b' , 'c' so can you show me how to fix it or are there other ways to write it in 1 line?\n", "AnswerId": "76378760", "AnswerBody": "Method 1\nThe error you're encountering is because you're trying to convert the entire string '2 5 6' into an integer using the int() function. 
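A minimal, self-contained sketch of the failure described above (the literal string '2 5 6' stands in for the one-line user input, since an interactive input() call can't be shown here):

```python
# int() rejects a string that contains more than one number
try:
    int("2 5 6")
except ValueError as exc:
    print(exc)  # invalid literal for int() with base 10: '2 5 6'

# Splitting the line first, then converting each token, succeeds
numbers = [int(token) for token in "2 5 6".split()]
print(numbers)  # [2, 5, 6]
```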
However, the int() function expects a single integer value, not a string containing multiple numbers.\ncode:\na = int(input())\nb = int(input())\nc = int(input())\n\nx = a + b\ny = b + c\nz = c + a\n\nmin_value = x\nif y < min_value:\n min_value = y\nif z < min_value:\n min_value = z\n\nprint(\"The minimum value is:\", min_value)\n\nYou'll be prompted to enter the values for a, b, and c separately, and the code will correctly calculate and display the minimum value among the three sums.\nMethod 2\nThis one is a more optimized solution:\ninput_values = input()\ninput_list = list(map(int, input_values.split()))\n\nmin_value = min(input_list[0] + input_list[1], input_list[1] + input_list[2], input_list[2] + input_list[0])\n\nprint(\"The minimum value is:\", min_value)\n\n\nThe split() method splits the input string at spaces, creating a list of string elements.\nThe map() function applies the int() function to each element of the split list, converting them into integers.\nlist() is used to convert the resulting map object into a list of integers.\nThe resulting list is stored in input_list for further calculations.\n\n" }, { "QuestionId": "76383945", "QuestionTitle": "Typescript type extension", "QuestionBody": "I try to define a custom interface like this:\nexport interface IAPIRequest<B, P, Q>\n{\n body: B;\n params: P;\n query: Q;\n}\n\nThis type is supposed to be extended in a lot of other types for each request my API is supposed to handle.\nFor example:\nexport interface ILoginRequest extends IAPIRequest<{ email: string; password: string; }, undefined, undefined> {}\n\nIt works a little, but every time I use this interface, I must provide all the properties even if they are undefined.\nExample:\nconst login = async ({ body }: ILoginRequest) => \n{\n ...\n}\n\nconst response = await login({ body: { email: 'mail@test.com', password: 'verystrongpassword' }, params: undefined, query: undefined });\n\nIt doesn't work if I don't provide the undefined properties.\nHow can 
I define an abstract type for IAPIRequest that would spare me from providing undefined values?\nPS: I've tried this as well:\nexport interface IAPIRequest<B, P, Q>\n{\n body?: B;\n params?: P;\n query?: Q;\n}\n\nEven for IAPIRequest where none of B, P, or Q allow undefined, I still get that the properties might be undefined\n", "AnswerId": "76384326", "AnswerBody": "TypeScript doesn't automatically treat properties that accept undefined as optional (although the converse, treating optional properties as accepting undefined, is true, unless you've enabled --exactOptionalPropertyTypes). There is a longstanding open feature request for this at microsoft/TypeScript#12400 (the title is about optional function parameters, not object properties, but the issue seems to have expanded to include object properties also). Nothing has been implemented there, although the discussion describes various workarounds.\nLet's define our own workaround; a utility type UndefinedIsOptional<T> that produces a version of T such that any property accepting undefined is optional. It could look like this:\ntype UndefinedIsOptional<T> = (Partial<T> &\n { [K in keyof T as undefined extends T[K] ? never : K]: T[K] }\n) extends infer U ? { [K in keyof U]: U[K] } : never\n\nThat's a combination of Partial<T>, which turns all properties optional, and a key-remapped type that suppresses all undefined-accepting properties. The intersection of those is essentially what you want (an intersection of an optional prop and a required prop is a required prop), but I use a technique described at How can I see the full expanded contract of a Typescript type? to display the type in a more palatable manner.\nThen we can define your type as\ntype IAPIRequest<B, P, Q> = UndefinedIsOptional<{\n body: B;\n params: P;\n query: Q;\n}>\n\nand note that this must be a type alias and not an interface because the compiler needs to know exactly which properties will appear (and apparently their optional-ness) to be an interface. 
This won't matter much with your example code but you should be aware of it.\nLet's test it out:\ntype ILR = IAPIRequest<{ email: string; password: string; }, undefined, undefined>\n/* type ILR = {\n body: {\n email: string;\n password: string;\n };\n params?: undefined;\n query?: undefined;\n} */\n\nThat looks like what you wanted, so you can define your ILoginRequest interface:\ninterface ILoginRequest extends IAPIRequest<\n { email: string; password: string; }, undefined, undefined> {\n}\n\nAlso, let's just look at what happens when the property includes undefined but is not only undefined:\ntype Other = IAPIRequest<{ a: string } | undefined, number | undefined, { b: number }>;\n/* type Other = {\n body?: {\n a: string;\n } | undefined;\n params?: number | undefined;\n query: {\n b: number;\n };\n} */\n\nHere body and params are optional because undefined is possible, but query is not because undefined is impossible.\nPlayground link to code\n" }, { "QuestionId": "76380868", "QuestionTitle": "How to configure the Quarkus Mailer extension to allow dynamic 'from' email addresses based on user?", "QuestionBody": "This Quarkus mailer guide requires that the sending email is preconfigured in property file: quarkus.mailer.from=YOUREMAIL@gmail.com. However, my use case for email includes unique originator email based on user. Using the provided method looks something like:\npublic void sendEmail(EmailSender emailSender) {\n\n // Send to each recipient\n emailMessageRepository.findByEmailSenderId(emailSender.getId())\n .forEach(emailMessage ->\n mailer.send(\n Mail.withText(emailMessage.getEmail(),\n emailSender.getSubject(),\n emailSender.getMessage())\n );\n );\n}\n\nHow can I include the sender's email address (i.e. 
'from') when the Mail.withText() method only provides for recipient email?\n", "AnswerId": "76380985", "AnswerBody": "The documentation shows how to configure multiple mailers (multiple from addresses):\nquarkus.mailer.from=your-from-address@gmail.com \nquarkus.mailer.host=smtp.gmail.com\n\nquarkus.mailer.aws.from=your-from-address@gmail.com \nquarkus.mailer.aws.host=${ses.smtp}\nquarkus.mailer.aws.port=587\n\nquarkus.mailer.sendgrid.from=your-from-address@gmail.com \nquarkus.mailer.sendgrid.host=${sendgrid.smtp-host}\nquarkus.mailer.sendgrid.port=465\n\nSo you would write:\nquarkus.mailer.from=default@gmail.com \n\nquarkus.mailer.aws.from=your_aws@gmail.com \n\nquarkus.mailer.sendgrid.from=your_sendgrid@gmail.com \n\nThen you would inject them as shown below and use them based on which mailer you want to send with:\n@Inject\n@MailerName(\"aws\") \nMailer mailer;\n\n\n@Inject\n@MailerName(\"sendgrid\") \nMailer mailer;\n\naws and sendgrid are the names in the quarkus.mailer.xxx.from keys.\nhttps://quarkus.io/guides/mailer-reference#multiple-mailer-configurations\n\n\nThe Quarkus Mailer is implemented on top of the Vert.x Mail Client,\nproviding an asynchronous and non-blocking way to send emails.\n\nIf you need fine control over how the mail is sent, for instance if you need to retrieve the message ids, you can inject the underlying client and use it directly:\n@Inject MailClient client;\n\nThen use it:\nMailMessage message = new MailMessage();\nmessage.setFrom(\"user@example.com (Example User)\");\nmessage.setTo(\"recipient@example.org\");\nmessage.setCc(\"Another User \");\nmessage.setText(\"this is the plain message text\");\nmessage.setHtml(\"this is html text vertx.io\");\n\nTo send using MailClient:\nmailClient.sendMail(message)\n .onSuccess(System.out::println)\n .onFailure(Throwable::printStackTrace);\n\nhttps://quarkus.io/guides/mailer-reference#using-the-underlying-vert-x-mail-client\nhttps://vertx.io/docs/vertx-mail-client/java/\n" }, { "QuestionId": "76380847", "QuestionTitle": 
"Why Drop trait is only executed at the end of the scope, instead of after the last use?", "QuestionBody": "This is a question from the Rustonomicon's chapter on lifetimes.\nThe first example can compile: x is a plain reference, and the compiler can shrink its lifetime to end at its last use, here the println!(), so x is dropped after this line.\nlet mut data = vec![1, 2, 3];\nlet x = &data[0];\nprintln!(\"{}\", x);\n// This is OK, x is no longer needed\ndata.push(4);\n\nBut the case is different when x is a struct that implements the Drop trait.\n#[derive(Debug)]\nstruct X<'a>(&'a i32);\n\nimpl Drop for X<'_> {\n fn drop(&mut self) {}\n}\n\nlet mut data = vec![1, 2, 3];\nlet x = X(&data[0]);\nprintln!(\"{:?}\", x);\ndata.push(4);\n// Here, the destructor is run and therefore this'll fail to compile.\n\nThe Nomicon says that in this case drop() is only executed at the very end of the scope, so x stays valid until the last line.\nBut why can't the compiler minimize the lifetime of x to the last use? And would running drop() just after the last use have some nontrivial side effects when x implements the Drop trait?\n", "AnswerId": "76380988", "AnswerBody": "The primary reason is that it was once defined to be like that, and changing it now isn't possible any more because it wouldn't be backwards-compatible and might break stuff. Since Drop::drop can run arbitrary code, the point at which it executes is observable program behavior, so the compiler can't silently move it earlier the way it shortens the lifetimes of plain references.\nYour code is easily fixable by introducing a nested scope, though, which is (to my understanding) best practice in those situations:\n#[derive(Debug)]\nstruct X<'a>(&'a i32);\n\nimpl Drop for X<'_> {\n fn drop(&mut self) {}\n}\n\nfn main() {\n let mut data = vec![1, 2, 3];\n {\n let x = X(&data[0]);\n println!(\"{:?}\", x);\n }\n data.push(4);\n}\n\nX(1)\n\nAlternatively, you could drop it manually:\n#[derive(Debug)]\nstruct X<'a>(&'a i32);\n\nimpl Drop for X<'_> {\n fn drop(&mut self) {}\n}\n\nfn main() {\n let mut data = vec![1, 2, 3];\n\n let x = X(&data[0]);\n println!(\"{:?}\", x);\n drop(x);\n\n data.push(4);\n}\n\nX(1)\n\n" }, { "QuestionId": "76384211", 
"QuestionTitle": "When it's necessary to create a custom spring boot starter rather than create a new microservice?", "QuestionBody": "I've a microservice architecture, and need some common logic.\nWhen is it necessary to create a custom Spring Boot starter rather than create a new microservice?\n", "AnswerId": "76384348", "AnswerBody": "In my experience, creating a new microservice from the ground up is generally about preventing monoliths from occurring. Microservices should generally have one job and do it well. You don't want to muddy up the implementation and purpose of your microservice by adding unrelated operations.\nThere are many design patterns for the \"types\" you could be creating, but I won't go into too much detail there. Overall, based on what business purpose you are solving, you can select your design and begin development. Different designs should be separated and not combined into monolithic styles. Here is a good article showcasing design options: https://www.openlegacy.com/blog/microservices-architecture-patterns/\nIf you find yourself having to re-create multiple microservices serving different use cases, you can always utilize a tool such as Yeoman to speed up creating these new projects. You can build a generator that will give you a working template so you don't have to spend the time redeveloping from the ground up each time you need a different service.\nHere is a guide that I wrote recently on creating your own Yeoman generator: https://medium.com/@dylanlamott/building-a-yeoman-generator-line-by-line-6966debb39a3\n" }, { "QuestionId": "76378628", "QuestionTitle": "How to fix 'int' object has no attribute 'astype' error when sending WhatsApp messages to large number of contacts using Python and pandas?", "QuestionBody": "AttributeError: 'int' object has no attribute 'astype' in automatic WhatsApp message sender script\n\nThe following is an automated WhatsApp message sender script I partially developed. 
I tried the following script and it worked fine with an Excel file with 5 numbers in it. However, when I tried upscaling it to 1700+ numbers, I got the following traceback:\nTraceback (most recent call last):\n File \"c:\\Users\\MSI\\Desktop\\AutoSenderPY\\main.py\", line 9, in \n cellphone = data.loc[i,'Cellphone'].astype(str)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nAttributeError: 'int' object has no attribute 'astype'\n\nThe script is the following:\nimport pandas as pd\nimport webbrowser as web\nimport pyautogui as pg\nimport time\n\ndata = pd.read_excel(\"book1.xlsx\", sheet_name='sheet1')\n\nfor i in range(len(data)):\n cellphone = data.loc[i,'Cellphone'].astype(str) \n \n message = \"Test Message\"\n \n web.open(\"https://web.whatsapp.com/send?phone=\" + cellphone + \"&text=\" + message)\n \n time.sleep(5.5) \n pg.click(1230,964) \n time.sleep(1) \n pg.press('enter') \n time.sleep(2) \n pg.hotkey('ctrl', 'w') \n time.sleep(1)\n\nWhy is that happening, and how can I get it working for those 1700+ numbers?\n", "AnswerId": "76378769", "AnswerBody": "Try using -\ncellphone = str(data.loc[i,'Cellphone'])\n\ndata.loc[i, 'Cellphone'] returns a single scalar, not a Series. Depending on the column's contents, that scalar can be a plain Python int, which has no .astype method (that's a NumPy/pandas method), so converting with the built-in str() is the safe choice either way.\n" }, { "QuestionId": "76378370", "QuestionTitle": "SQL How to return record ID's not included in table 2 from table 1 based off of user ID in table 2", "QuestionBody": "I have two tables, one has course name and course ID. The second table has the ID of the students and the course ID they have taken. I need to find all the class ID’s of the classes a student hasn’t taken. For example, in table 2 student 03 has taken classes 01 and 02 but not 03 and 04 from table one. The course ID’s 03 and 04 from table one are what I need to return (all the classes student 03 hasn't taken). 
I've tried numerous queries and the last one I tried is:\nSELECT table1.* FROM table1\nLEFT JOIN table2\nON\n table1.course_ID = table2.course_ID\nWHERE\n table2.course_ID IS NULL\nAND \n table2.user_ID != 3\n\nAppreciate your help!\ntable 1\ncourse_ID | courseName\n01 | math\n02 | English\n03 | art\n04 | music\n\ntable 2\ncert_Id | course_ID | user_ID\n01 | 01 | 03\n02 | 02 | 03\n", "AnswerId": "76378800", "AnswerBody": "As per your current requirement, the query below will work:\n SELECT * FROM table1 t1 \n WHERE course_ID \n NOT IN (SELECT course_ID FROM table2 WHERE user_ID = 3)\n\nIf you have more records in table2 and need to populate more than one student's details, then you will need different logic.\nIf you would rather modify your original query, use:\n SELECT table1.* FROM table1 \n LEFT JOIN table2 ON table1.course_ID = table2.course_ID \n AND table2.user_ID = 3 \n WHERE table2.course_ID IS NULL\n\n" }, { "QuestionId": "76380967", "QuestionTitle": "Why is SQL Server Pivot being case sensitive on TabTypeId instead of treating it as the actual column name?", "QuestionBody": "In T-Sql I am parsing JSON and using PIVOT.\nSelect * from (select [key],convert(varchar,[value])[value] \nfrom openjson ('{\"Name\":\"tew\",\"TabTypeId\":9,\"Type\":3}'))A\n pivot(max(value) for [key] in ([Name],tabTypeId,[Type]))b\n\nIt is not treating tabTypeId as equal to TabTypeId. I am getting NULL for tabTypeId.\nIf I use TabTypeId I get the value 9.\nWhy is it happening?\n", "AnswerId": "76381059", "AnswerBody": "It's not PIVOT that is case sensitive, it's the data returned from OPENJSON that is. 
If you check the data returned from it, you'll see that the [key] column has a binary collation:\nSELECT name, system_type_name, collation_name\nFROM sys.dm_exec_describe_first_result_set(N'SELECT [key], CONVERT(varchar, [value]) AS [value] FROM OPENJSON(''{\"Name\":\"tew\",\"TabTypeId\":9,\"Type\":3}'');',NULL,NULL)\n\nname | system_type_name | collation_name\nkey | nvarchar(4000) | Latin1_General_BIN2\nvalue | varchar(30) | SQL_Latin1_General_CP1_CI_AS\n\nFor binary collations the actual bytes of the characters must match. As such N'tabTypeId' and N'TabTypeId' are not equal, as N'T' and N't' have the binary values 0x5400 and 0x7400.\nThough I am unsure why you are using PIVOT at all; just define your columns in your OPENJSON call:\nSELECT name, --Columns are intentionally demonstrating non-case sensitivity\n tabTypeId,\n type\nFROM OPENJSON('{\"Name\":\"tew\",\"TabTypeId\":9,\"Type\":3}')\n WITH (Name varchar(3),\n TabTypeId int,\n Type int);\n\nNote that in the WITH clause of OPENJSON the column names are still case sensitive. tabTypeId int would also yield NULL. If you \"had\" to have a column called tabTypeId defined prior to the SELECT you would use tabTypeId int '$.TabTypeId' instead.\n" }, { "QuestionId": "76384091", "QuestionTitle": "PSQL / SQL: Is it possible to further optimize this query without requiring write access to the database?", "QuestionBody": "I have a query here that uses four subqueries inside a single CTE, and each subquery is scanning every row of another CTE for each row in itself. 
I would think that this is very inefficient.\nAre there any SQL optimizations that I can implement now that the proof of concept is finished?\nI don't have write access to the database, so optimizations would be required within the select clause.\nWITH datetable AS (\n SELECT generate_series(\n DATE_TRUNC('week', (SELECT MIN(created_at) FROM org_accounts.deleted_users)),\n DATE_TRUNC('week', now()),\n '1 week'::INTERVAL\n )::DATE AS week_start\n), all_users AS (\n SELECT\n id,\n registered_at,\n NULL AS deleted_at\n FROM org_accounts.users\n WHERE status = 'active'\n AND org_accounts.__user_is_qa(id) <> 'Y'\n AND email NOT LIKE '%@org%'\n \n UNION ALL\n \n SELECT\n id,\n created_at AS registered_at,\n deleted_at\n FROM org_accounts.deleted_users\n WHERE deleter_id = id\n AND email NOT LIKE '%@org%'\n), weekly_activity AS (\n SELECT\n DATE_TRUNC('week', date)::DATE AS week_start,\n COUNT(DISTINCT user_id) AS weekly_active_users\n FROM (\n SELECT user_id, date\n FROM org_storage_extra.stats_user_daily_counters \n WHERE type in ('created_file', 'created_folder', 'created_secure_fetch')\n \n UNION ALL\n \n SELECT user_id, date\n FROM ipfs_pinning_facility.stats_user_daily_counters\n WHERE type <> 'shares_viewed_by_others'\n ) activity_ids_dates\n WHERE EXISTS(SELECT 1 from all_users WHERE id = user_id)\n GROUP BY week_start\n), preprocessed AS (\n SELECT\n week_start,\n (\n SELECT COUNT(DISTINCT id)\n FROM all_users\n WHERE registered_at < week_start\n AND (deleted_at IS NULL OR deleted_at > week_start)\n ) AS actual_users,\n (\n SELECT COUNT(DISTINCT id)\n FROM all_users\n WHERE deleted_at < week_start + '1 week'::INTERVAL\n ) AS cumulative_churned_users,\n (\n SELECT COUNT(DISTINCT id)\n FROM all_users\n WHERE registered_at >= week_start\n AND registered_at < week_start + '1 week'::INTERVAL\n ) AS weekly_new_users,\n (\n SELECT COUNT(DISTINCT id)\n FROM all_users\n WHERE deleted_at >= week_start\n AND deleted_at < week_start + '1 week'::INTERVAL\n ) AS 
weekly_churned_users,\n COALESCE(weekly_active_users, 0) AS weekly_active_users\n FROM datetable dt\n LEFT JOIN weekly_activity USING (week_start)\n ORDER BY week_start DESC\n)\nSELECT\n week_start AS for_week_of, \n actual_users + cumulative_churned_users AS cumulative_users,\n cumulative_churned_users,\n cumulative_churned_users::FLOAT / NULLIF((actual_users + cumulative_churned_users)::FLOAT, 0) AS cumulated_churn_rate,\n actual_users,\n weekly_new_users,\n weekly_churned_users,\n weekly_active_users,\n weekly_churned_users::FLOAT / NULLIF(actual_users::FLOAT, 0) AS weekly_churn_rate \nFROM preprocessed;\n\nResults of query analysis:\nQUERY PLAN\nSubquery Scan on preprocessed (cost=40875.45..7501783.95 rows=1000 width=68) (actual time=1553.471..13613.116 rows=231 loops=1)\n Output: preprocessed.week_start, (preprocessed.actual_users + preprocessed.cumulative_churned_users), preprocessed.cumulative_churned_users, ((preprocessed.cumulative_churned_users)::double precision / NULLIF(((preprocessed.actual_users + preprocessed.cumulative_churned_users))::double precision, '0'::double precision)), preprocessed.actual_users, preprocessed.weekly_new_users, preprocessed.weekly_churned_users, preprocessed.weekly_active_users, ((preprocessed.weekly_churned_users)::double precision / NULLIF((preprocessed.actual_users)::double precision, '0'::double precision))\n Buffers: shared hit=287734 read=1964, temp read=274840 written=873\n CTE all_users\n -> Append (cost=0.00..30953.99 rows=70293 width=32) (actual time=0.099..1313.372 rows=71228 loops=1)\n Buffers: shared hit=285995 read=1964\n -> Seq Scan on org_accounts.users (cost=0.00..27912.65 rows=70009 width=32) (actual time=0.099..1289.469 rows=70007 loops=1)\n Output: users.id, users.registered_at, NULL::timestamp with time zone\n Filter: ((users.email !~~ '%@mailinator%'::text) AND (users.email !~~ '%@org%'::text) AND (users.email !~~ '%testaccnt%'::text) AND (users.status = 'active'::text) AND 
((org_accounts.__user_is_qa(users.id))::text <> 'Y'::text))\n Rows Removed by Filter: 9933\n Buffers: shared hit=285269 read=1964\n -> Seq Scan on org_accounts.deleted_users (cost=0.00..1986.94 rows=284 width=32) (actual time=0.014..14.267 rows=1221 loops=1)\n Output: deleted_users.id, deleted_users.created_at, deleted_users.deleted_at\n Filter: ((deleted_users.email !~~ '%@mailinator%'::text) AND (deleted_users.email !~~ '%@org%'::text) AND (deleted_users.email !~~ '%testaccnt%'::text) AND (deleted_users.deleter_id = deleted_users.id))\n Rows Removed by Filter: 61826\n Buffers: shared hit=726\n -> Merge Left Join (cost=9921.47..7470794.97 rows=1000 width=44) (actual time=1553.467..13612.496 rows=231 loops=1)\n Output: (((generate_series(date_trunc('week'::text, $5), date_trunc('week'::text, now()), '7 days'::interval)))::date), (SubPlan 2), (SubPlan 3), (SubPlan 4), (SubPlan 5), COALESCE(weekly_activity.weekly_active_users, '0'::bigint)\n Inner Unique: true\n Merge Cond: ((((generate_series(date_trunc('week'::text, $5), date_trunc('week'::text, now()), '7 days'::interval)))::date) = weekly_activity.week_start)\n Buffers: shared hit=287734 read=1964, temp read=274840 written=873\n -> Sort (cost=1601.45..1603.95 rows=1000 width=4) (actual time=10.108..10.250 rows=231 loops=1)\n Output: (((generate_series(date_trunc('week'::text, $5), date_trunc('week'::text, now()), '7 days'::interval)))::date)\n Sort Key: (((generate_series(date_trunc('week'::text, $5), date_trunc('week'::text, now()), '7 days'::interval)))::date) DESC\n Sort Method: quicksort Memory: 35kB\n Buffers: shared hit=726\n -> Result (cost=1514.10..1541.62 rows=1000 width=4) (actual time=9.986..10.069 rows=231 loops=1)\n Output: ((generate_series(date_trunc('week'::text, $5), date_trunc('week'::text, now()), '7 days'::interval)))::date\n Buffers: shared hit=726\n InitPlan 6 (returns $5)\n -> Aggregate (cost=1514.09..1514.10 rows=1 width=8) (actual time=9.974..9.975 rows=1 loops=1)\n Output: 
min(deleted_users_1.created_at)\n Buffers: shared hit=726\n -> Seq Scan on org_accounts.deleted_users deleted_users_1 (cost=0.00..1356.47 rows=63047 width=8) (actual time=0.006..4.332 rows=63047 loops=1)\n Output: deleted_users_1.id, deleted_users_1.email, deleted_users_1.created_at, deleted_users_1.deleter_id, deleted_users_1.deleted_at, deleted_users_1.registration_app\n Buffers: shared hit=726\n -> ProjectSet (cost=0.00..5.03 rows=1000 width=8) (actual time=9.984..10.030 rows=231 loops=1)\n Output: generate_series(date_trunc('week'::text, $5), date_trunc('week'::text, now()), '7 days'::interval)\n Buffers: shared hit=726\n -> Result (cost=0.00..0.01 rows=1 width=0) (actual time=0.000..0.001 rows=1 loops=1)\n -> Sort (cost=8320.02..8320.52 rows=200 width=12) (actual time=1475.315..1475.418 rows=159 loops=1)\n Output: weekly_activity.weekly_active_users, weekly_activity.week_start\n Sort Key: weekly_activity.week_start DESC\n Sort Method: quicksort Memory: 32kB\n Buffers: shared hit=287008 read=1964, temp read=412 written=872\n -> Subquery Scan on weekly_activity (cost=8050.90..8312.37 rows=200 width=12) (actual time=1466.686..1475.279 rows=159 loops=1)\n Output: weekly_activity.weekly_active_users, weekly_activity.week_start\n Buffers: shared hit=287008 read=1964, temp read=412 written=872\n -> GroupAggregate (cost=8050.90..8310.37 rows=200 width=12) (actual time=1466.685..1475.254 rows=159 loops=1)\n Output: ((date_trunc('week'::text, (\"*SELECT* 1\".date)::timestamp with time zone))::date), count(DISTINCT \"*SELECT* 1\".user_id)\n Group Key: ((date_trunc('week'::text, (\"*SELECT* 1\".date)::timestamp with time zone))::date)\n Buffers: shared hit=287008 read=1964, temp read=412 written=872\n -> Sort (cost=8050.90..8136.22 rows=34130 width=20) (actual time=1466.668..1468.872 rows=23005 loops=1)\n Output: ((date_trunc('week'::text, (\"*SELECT* 1\".date)::timestamp with time zone))::date), \"*SELECT* 1\".user_id\n Sort Key: ((date_trunc('week'::text, (\"*SELECT* 
1\".date)::timestamp with time zone))::date)\n Sort Method: quicksort Memory: 2566kB\n Buffers: shared hit=287008 read=1964, temp read=412 written=872\n -> Hash Join (cost=1586.09..5481.12 rows=34130 width=20) (actual time=1411.350..1462.022 rows=23005 loops=1)\n Output: (date_trunc('week'::text, (\"*SELECT* 1\".date)::timestamp with time zone))::date, \"*SELECT* 1\".user_id\n Inner Unique: true\n Hash Cond: (\"*SELECT* 1\".user_id = all_users.id)\n Buffers: shared hit=287008 read=1964, temp read=412 written=872\n -> Append (cost=0.00..3080.17 rows=68261 width=20) (actual time=0.010..25.441 rows=68179 loops=1)\n Buffers: shared hit=1013\n -> Subquery Scan on \"*SELECT* 1\" (cost=0.00..1018.43 rows=21568 width=20) (actual time=0.008..7.895 rows=21532 loops=1)\n Output: \"*SELECT* 1\".date, \"*SELECT* 1\".user_id\n Buffers: shared hit=372\n -> Seq Scan on org_storage_extra.stats_user_daily_counters (cost=0.00..802.75 rows=21568 width=20) (actual time=0.008..5.910 rows=21532 loops=1)\n Output: stats_user_daily_counters.user_id, stats_user_daily_counters.date\n Filter: (stats_user_daily_counters.type = ANY ('{created_file,created_folder,created_secure_fetch}'::text[]))\n Rows Removed by Filter: 9795\n Buffers: shared hit=372\n -> Subquery Scan on \"*SELECT* 2\" (cost=0.00..1720.44 rows=46693 width=20) (actual time=0.009..12.460 rows=46647 loops=1)\n Output: \"*SELECT* 2\".date, \"*SELECT* 2\".user_id\n Buffers: shared hit=641\n -> Seq Scan on ipfs_pinning_facility.stats_user_daily_counters stats_user_daily_counters_1 (cost=0.00..1253.51 rows=46693 width=20) (actual time=0.009..8.209 rows=46647 loops=1)\n Output: stats_user_daily_counters_1.user_id, stats_user_daily_counters_1.date\n Filter: (stats_user_daily_counters_1.type <> 'shares_viewed_by_others'::text)\n Rows Removed by Filter: 2354\n Buffers: shared hit=641\n -> Hash (cost=1583.59..1583.59 rows=200 width=16) (actual time=1411.250..1411.251 rows=71228 loops=1)\n Output: all_users.id\n Buckets: 131072 (originally 
1024) Batches: 2 (originally 1) Memory Usage: 3073kB\n Buffers: shared hit=285995 read=1964, temp read=100 written=717\n -> HashAggregate (cost=1581.59..1583.59 rows=200 width=16) (actual time=1383.986..1398.270 rows=71228 loops=1)\n Output: all_users.id\n Group Key: all_users.id\n Batches: 5 Memory Usage: 4161kB Disk Usage: 1544kB\n Buffers: shared hit=285995 read=1964, temp read=100 written=560\n -> CTE Scan on all_users (cost=0.00..1405.86 rows=70293 width=16) (actual time=0.102..1351.241 rows=71228 loops=1)\n Output: all_users.id\n Buffers: shared hit=285995 read=1964, temp written=296\n SubPlan 2\n -> Aggregate (cost=1777.05..1777.06 rows=1 width=8) (actual time=20.197..20.197 rows=1 loops=231)\n Output: count(DISTINCT all_users_1.id)\n Buffers: temp read=68607 written=1\n -> CTE Scan on all_users all_users_1 (cost=0.00..1757.33 rows=7888 width=16) (actual time=0.883..10.874 rows=27239 loops=231)\n Output: all_users_1.id, all_users_1.registered_at, all_users_1.deleted_at\n Filter: ((all_users_1.registered_at < (((generate_series(date_trunc('week'::text, $5), date_trunc('week'::text, now()), '7 days'::interval)))::date)) AND ((all_users_1.deleted_at IS NULL) OR (all_users_1.deleted_at > (((generate_series(date_trunc('week'::text, $5), date_trunc('week'::text, now()), '7 days'::interval)))::date))))\n Rows Removed by Filter: 43989\n Buffers: temp read=68607 written=1\n SubPlan 3\n -> Aggregate (cost=1815.90..1815.91 rows=1 width=8) (actual time=11.215..11.215 rows=1 loops=231)\n Output: count(DISTINCT all_users_2.id)\n Buffers: temp read=68607\n -> CTE Scan on all_users all_users_2 (cost=0.00..1757.33 rows=23431 width=16) (actual time=11.009..11.150 rows=231 loops=231)\n Output: all_users_2.id, all_users_2.registered_at, all_users_2.deleted_at\n Filter: (all_users_2.deleted_at < ((((generate_series(date_trunc('week'::text, $5), date_trunc('week'::text, now()), '7 days'::interval)))::date) + '7 days'::interval))\n Rows Removed by Filter: 70997\n Buffers: temp 
read=68607\n SubPlan 4\n -> Aggregate (cost=1933.94..1933.95 rows=1 width=8) (actual time=14.515..14.515 rows=1 loops=231)\n Output: count(DISTINCT all_users_3.id)\n Buffers: temp read=68607\n -> CTE Scan on all_users all_users_3 (cost=0.00..1933.06 rows=351 width=16) (actual time=2.264..14.424 rows=308 loops=231)\n Output: all_users_3.id, all_users_3.registered_at, all_users_3.deleted_at\n Filter: ((all_users_3.registered_at >= (((generate_series(date_trunc('week'::text, $5), date_trunc('week'::text, now()), '7 days'::interval)))::date)) AND (all_users_3.registered_at < ((((generate_series(date_trunc('week'::text, $5), date_trunc('week'::text, now()), '7 days'::interval)))::date) + '7 days'::interval)))\n Rows Removed by Filter: 70920\n Buffers: temp read=68607\n SubPlan 5\n -> Aggregate (cost=1933.94..1933.95 rows=1 width=8) (actual time=6.556..6.556 rows=1 loops=231)\n Output: count(DISTINCT all_users_4.id)\n Buffers: temp read=68607\n -> CTE Scan on all_users all_users_4 (cost=0.00..1933.06 rows=351 width=16) (actual time=6.441..6.547 rows=5 loops=231)\n Output: all_users_4.id, all_users_4.registered_at, all_users_4.deleted_at\n Filter: ((all_users_4.deleted_at >= (((generate_series(date_trunc('week'::text, $5), date_trunc('week'::text, now()), '7 days'::interval)))::date)) AND (all_users_4.deleted_at < ((((generate_series(date_trunc('week'::text, $5), date_trunc('week'::text, now()), '7 days'::interval)))::date) + '7 days'::interval)))\n Rows Removed by Filter: 71223\n Buffers: temp read=68607\nPlanning Time: 0.612 ms\nExecution Time: 13615.054 ms\n\n", "AnswerId": "76384360", "AnswerBody": "An obvious optimization is to eliminate redundant table scans. There isn't any need in preprocessed to query from all_users more than once. 
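If the aggregate FILTER clause is unfamiliar, here is a minimal self-contained sketch (SQLite through Python's stdlib; the table name and data are made up for illustration) of how a single scan can produce several conditional counts:

```python
import sqlite3

# Hypothetical miniature of the all_users CTE: several conditional
# counts computed in one pass using the aggregate FILTER clause
# (supported by SQLite 3.30+ and by PostgreSQL).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE all_users (id INTEGER, deleted INTEGER)")
con.executemany(
    "INSERT INTO all_users VALUES (?, ?)",
    [(1, 0), (2, 1), (3, 0), (4, 1), (5, 1)],
)
actual, churned = con.execute(
    "SELECT COUNT(*) FILTER (WHERE deleted = 0),"
    "       COUNT(*) FILTER (WHERE deleted = 1)"
    "  FROM all_users"
).fetchone()
print(actual, churned)  # both counts come from a single scan of the table
```

The rewritten query below applies the same idea with COUNT(DISTINCT u.id) FILTER (WHERE ...), so each weekly statistic comes from one pass over all_users instead of four separate subquery scans.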
The following query uses COUNT with FILTER to gather the same statistics:\nWITH datetable AS (SELECT GENERATE_SERIES(\n DATE_TRUNC('week', (SELECT MIN(created_at) FROM org_accounts.deleted_users)),\n DATE_TRUNC('week', NOW()),\n '1 week'::INTERVAL\n )::DATE AS week_start),\n all_users AS (SELECT id,\n registered_at,\n NULL AS deleted_at\n FROM org_accounts.users\n WHERE status = 'active'\n AND org_accounts.__user_is_qa(id) <> 'Y'\n AND email NOT LIKE '%@org%'\n UNION ALL\n SELECT id,\n created_at AS registered_at,\n deleted_at\n FROM org_accounts.deleted_users\n WHERE deleter_id = id\n AND email NOT LIKE '%@org%'),\n weekly_activity AS (SELECT DATE_TRUNC('week', date)::DATE AS week_start,\n COUNT(DISTINCT user_id) AS weekly_active_users\n FROM (SELECT user_id, date\n FROM org_storage_extra.stats_user_daily_counters\n WHERE type IN ('created_file', 'created_folder', 'created_secure_fetch')\n UNION ALL\n SELECT user_id, date\n FROM ipfs_pinning_facility.stats_user_daily_counters\n WHERE type <> 'shares_viewed_by_others') activity_ids_dates\n WHERE EXISTS(SELECT 1 FROM all_users WHERE id = user_id)\n GROUP BY week_start),\n preprocessed AS (SELECT week_start,\n us.actual_users,\n us.cumulative_churned_users,\n us.weekly_new_users,\n us.weekly_churned_users,\n COALESCE(weekly_active_users, 0) AS weekly_active_users\n FROM datetable dt\n CROSS JOIN LATERAL (SELECT\n COUNT(DISTINCT u.id) FILTER (WHERE u.registered_at < dt.week_start AND\n (u.deleted_at IS NULL OR u.deleted_at > dt.week_start)) AS actual_users,\n COUNT(DISTINCT u.id)\n FILTER (WHERE u.deleted_at < dt.week_start + '1 week'::INTERVAL) AS cumulative_churned_users,\n COUNT(DISTINCT u.id)\n FILTER (WHERE u.registered_at >= dt.week_start AND u.registered_at <\n dt.week_start +\n '1 week'::INTERVAL) AS weekly_new_users,\n COUNT(DISTINCT u.id)\n FILTER (WHERE u.deleted_at >= dt.week_start AND u.deleted_at <\n dt.week_start +\n '1 week'::INTERVAL) AS weekly_churned_users\n FROM all_users u\n WHERE u.registered_at 
< dt.week_start + '1 week'::INTERVAL\n OR (u.deleted_at >= dt.week_start AND\n u.deleted_at < dt.week_start + '1 week'::INTERVAL)) us\n LEFT JOIN weekly_activity\n USING (week_start)\n ORDER BY week_start DESC)\nSELECT week_start AS for_week_of,\n actual_users + cumulative_churned_users AS cumulative_users,\n cumulative_churned_users,\n cumulative_churned_users::FLOAT /\n NULLIF((actual_users + cumulative_churned_users)::FLOAT, 0) AS cumulated_churn_rate,\n actual_users,\n weekly_new_users,\n weekly_churned_users,\n weekly_active_users,\n weekly_churned_users::FLOAT / NULLIF(actual_users::FLOAT, 0) AS weekly_churn_rate\n FROM preprocessed;\n\nThere are probably other optimizations possible, but this one was immediately evident.\n" }, { "QuestionId": "76378322", "QuestionTitle": "How can I convert an int to a generic type containing complex128 in Go?", "QuestionBody": "I cannot work out how to convert an int to a generic type containing complex128. Here is an example which doesn't compile unless the complex128 is commented out:\npackage main\n\nimport \"fmt\"\n\ntype val interface {\n int64 | float64 | complex128\n}\n\nfunc f[V val](a, b V) (c V) {\n q := calc()\n return a * b * V(q)\n}\n\nfunc calc() int {\n // lengthy calculation that returns an int\n return 1\n}\n\nfunc main() {\n fmt.Printf(\"%v\\n\", f(int64(1), int64(2)))\n}\n\nThis is simplified from a much larger calculation. 
I've tried using a switch, but every syntax I have attempted seems to meet resistance of one kind or another.\nHow can I multiply a and b with an integer?\nI have tried using a switch on the type of the return variable such as any(c).(type), but for example if I have case complex128: then it refuses to allow the complex builtin since it doesn't return a V.\nWithout the complex128 the above will compile.\n", "AnswerId": "76378801", "AnswerBody": "This one works, but it needs to list every type in the switch statement:\nfunc f[V val](a, b V) (c V) {\n q := calc()\n\n var temp any\n switch any(c).(type) {\n case complex128:\n temp = complex(float64(q), 0)\n case int64:\n temp = int64(q)\n default:\n temp = float64(q)\n }\n return a * b * (temp.(V))\n}\n\n" }, { "QuestionId": "76378721", "QuestionTitle": "Hide 'Display Cart' button in WooCommerce mini cart widget", "QuestionBody": "In WordPress with the WooCommerce plugin, is there any way to hide the \"Display Cart\" button in the mini cart widget?\nI can hide the \"Checkout\" button individually, but there seems to be no specific CSS class for the \"Display Cart\" button.\n", "AnswerId": "76378829", "AnswerBody": "You can try removing the button in PHP:\n \nadd_action( 'woocommerce_widget_shopping_cart_buttons', 'bbloomer_remove_view_cart_minicart', 1 );\n \nfunction bbloomer_remove_view_cart_minicart() {\n remove_action( 'woocommerce_widget_shopping_cart_buttons', 'woocommerce_widget_shopping_cart_button_view_cart', 10 );\n}\n\nOr hide it with CSS:\n.widget .woocommerce-mini-cart__buttons a:not(.checkout) {\n display: none;\n}\n\n" }, { "QuestionId": "76378661", "QuestionTitle": "The fastest way to convert a UInt64 hex string to a UInt32 value preserving as many leading digits as possible, i.e. truncation", "QuestionBody": "I'm looking for the fastest way to parse a hex string representing a ulong into a uint keeping as many leading digits as a uint can handle and discarding the rest. 
For example,\nstring hex = \"0xab54a9a1df8a0edb\"; // 12345678991234567899\nShould output: uint result = 1234567899;\nI can do this by simply parsing the hex into a ulong, getting the digits using ToString and then just taking as many of them as would fit into uint without overflowing but I need something much faster. Thanks. C# code preferred but any would do.\n", "AnswerId": "76378944", "AnswerBody": "For decimal truncation, all the high bits of the hex digit affect the low 9 or 10 decimal digits, so you need to convert the whole thing. Is there an algorithm to convert massive hex string to bytes stream QUICKLY? asm/C/C++ has C++ with SSE intrinsics. I commented there with some possible improvements to that, and to https://github.com/zbjornson/fast-hex . This could be especially good if you're using SIMD to find numeric literals in larger buffers, so you might have the hex string in a SIMD register already. (Not sure if SIMDJSON does that.)\nHex-string to 64-bit integer is something SIMD certainly can speed up, e.g. do something to map each digit to a 0-15 integer, combine pairs of bytes to pack nibbles (e.g. with x86 pmaddubsw), then shuffle those 8-bit chunks to the bottom of a register. (e.g. packuswb or pshufb). x86 at least has efficient SIMD to GP-integer movq rax, xmm0, although the ARM equivalent is slow on some ARM CPUs.\n(Getting a speedup from SIMD for ASCII hex -> uint is much easier if your strings are fixed-length, and probably if you don't need to check for invalid characters that aren't hex digits.)\n\nDecimal truncation of u64 (C# ulong) to fit in u32 (C# uint)\nModulo by a power of 10 truncates to some number of decimal digits.\n(uint)(x % 10000000000) works for some numbers, but 10000000000 (1e10 = one followed by 10 zeros) is larger than 2^32-1. Consider an input like 0x2540be3ff (9999999999). 
We'd get (uint)9999999999 producing 1410065407 = 0x540be3ff (keeping the low 32 bits of that 34-bit number.)\nSo perhaps try modulo 1e10, but if it's too big for u32 then modulo 1e9.\n ulong tendigit = x % 10000000000; // 1e10\n uint truncated = tendigit <= (ulong)0xffffffff ? tendigit : (x % 1000000000); // % 1e9 keeps 9 decimal digits\n\nIf this isn't correct C# syntax or the literals need some decoration to make them ulong (like C 10000000000uLL for good measure), please let me know.\nIt's probably at least as efficient to just modulo the original number two different ways than to try to get the leading decimal digit of x % 1e10 and subtract it or whatever. The asm is going to need two 64-bit multiplicative inverse constants, and starting from the original number again keeps critical-path latency shorter for out-of-order exec if branch prediction predicts that it needs to calculate the nine-digit truncation.\n\nBinary truncation\n@Matthew Whited deleted his answer (due to a bug in the decimal truncation part), but his binary truncation part based on substrings of the original hex input could perhaps be more efficient in some cases than doing the full conversion and then casting to a narrower type or masking with AND.\n\nIf you want the last 8 bytes of the hex string\nuint.Parse(hex[^8..],NumberStyles.HexNumber)\n\nIf you want the first 8 bytes\nuint.Parse(hex[2..10], NumberStyles.HexNumber);\n\n\n" }, { "QuestionId": "76384304", "QuestionTitle": "How to backup a full project of firebase", "QuestionBody": "I am faced a problem. I have a project which is in firebase. I have used there firebase Authenticate, Firebase realtime database, Firebase function and some more. Now I have changed my decision. I want to make my own server where I will set up and manage everything.\nSo that I want to backup my project to move all data to other framework like spring boot project.\nIn this situation how can I get the whole project? 
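The two-step modulo logic from the C# answer above can be sketched in Python for verification (the function name and structure are mine, not from the answer; Python ints are arbitrary-precision, so the 32-bit limit is an explicit constant):

```python
U32_MAX = 0xFFFFFFFF  # 2**32 - 1

def truncate_decimal(x: int) -> int:
    # Try keeping 10 decimal digits; if that doesn't fit in 32 bits,
    # fall back to 9 digits, mirroring the two-modulo C# approach.
    ten_digits = x % 10_000_000_000      # 1e10
    if ten_digits <= U32_MAX:
        return ten_digits
    return x % 1_000_000_000             # 1e9

print(truncate_decimal(12345678991234567899))  # -> 1234567899
print(truncate_decimal(9999999999))            # -> 999999999 (the 1e10 result wouldn't fit)
```

Computing both moduli from the original value, rather than deriving one from the other, matches the answer's point about keeping the critical-path latency short.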
User Auth data, Firebase Realtime database, Firestore etc.\n", "AnswerId": "76384361", "AnswerBody": "You'll have to write code or use the CLI to query all of the data you want, and write it to a place you want. Firebase does not provide a tool to do all this automatically for an entire project. You will need to deal with each product's data separately.\nYou can use the Firebase Admin SDK or the Firebase CLI to access data from the products you listed.\nSee also:\n\nIs it possible to backup Firebase DB?\nhttps://firebase.google.com/docs/firestore/manage-data/export-import\nhttps://firebase.google.com/docs/cli/auth\n\n" }, { "QuestionId": "76378577", "QuestionTitle": "Why does ruby recognise a method outside of a class, but not inside?", "QuestionBody": "I am trying to build a simple language translating program. I imported the 'language_converter' gem to aid with this goal. I wrote the following code:\nrequire 'language_converter'\n\nclass Translator\n def initialize\n @to = 'ja'; \n @from = 'en'; \n end\n\n def translate text\n lc(text, @to,@from)\n end\nend\n\n#puts lc('welcome to Japan!', 'ja','en');\n \nt = Translator.new\n\np t.translate('welcome to Japan!');\n\nThis code results in the error: undefined method 'lc' for # (NoMethodError)\nHowever, when i uncomment the code on line 15, ruby can access the lc method and return some japanese. Does anyone know why the method is 'defined' outside of the class but not inside?\nEdit: the language-converter gem is not my own. also, I cannot find the source code on its homepage.\nI have also tried adding two semicolons before the lc method like so: ::lc(text, @to,@from). This results in the error: syntax error, unexpected local variable or method, expecting constant\n", "AnswerId": "76378945", "AnswerBody": "The gem is more than 10 years old and only has one method. 
And that method is implemented as a class method.\nYou are probably better off just rewriting that method in your application with modern Ruby syntax and proper error handling.\nFor reference, this is how lib/language_converter.rb in the gem looks:\nrequire 'net/http'\nrequire 'rubygems'\nrequire \"uri\"\nrequire 'json'\n\nclass UnSupportedLanguage < RuntimeError\n\n def initialize(message='')\n @msg = \"not supported.\"\n end\nend\n\n\n def self.lc( text, to, from='en' )\n\n begin\n\n uri = URI.parse(\"http://mymemory.translated.net/api/get\")\n\n response = Net::HTTP.post_form(uri, {\"q\" => text,\"langpair\"=>\"#{from.to_s.downcase}|#{to.to_s.downcase}\", \"per_page\" => \"50\"})\n\n json_response_body = JSON.parse( response.body )\n\n if json_response_body['responseStatus'] == 200\n json_response_body['responseData']['translatedText']\n else\n puts json_response_body['responseDetails']\n raise StandardError, response['responseDetails']\n end\n rescue UnSupportedLanguage\n raise UnSupportedLanguage.new\n rescue => err_msg\n puts \"#{err_msg}\"\n end\n\n end\n\n" }, { "QuestionId": "76384270", "QuestionTitle": "How to override parent's styles in css?", "QuestionBody": "In this example, I want the purple rectangle to change its opacity to 100% regardless of the value of the parent. I tried using all: unset/initial and !important but it doesn't seem to work.\n\n\n.rect {\n width: 500px;\n height: 600px;\n margin-top: 200px;\n margin-left: 300px;\n background-color: black;\n /* this V */\n opacity: 37%;\n z-index: -1;\n}\n\n.rect1 {\n all: unset;\n position: absolute;\n z-index: 10;\n width: 259px;\n height: 300px;\n margin-top: 500px;\n margin-left: 50px;\n background-color: purple;\n /* to this V */\n opacity: 100% !important;\n}\n
\n\n\n\n", "AnswerId": "76384365", "AnswerBody": "As Haworth pointed out, setting opacity on an element also fades all of its children, because the whole element (children included) is composited at that opacity.\nIf you want to get the same effect while retaining your html structure I'd recommend a different approach for the same result: use RGBA (or hex with an alpha channel) on the background-color property directly. See example below.\n\n\nbody {\n height: 100%;\n width: 100%;\n background: url(https://picsum.photos/800) no-repeat;\n background-size: cover;\n}\n\n.rect {\n width: 500px;\n height: 600px;\n margin-top: 200px;\n margin-left: 300px;\n background-color: rgba(0,0,0,.37);\n /* this V\n opacity: 37%;*/\n z-index: -1;\n}\n\n.rect1 {\n position: absolute;\n z-index: 10;\n width: 259px;\n height: 300px;\n margin-top: 500px;\n margin-left: 50px;\n background-color: purple;\n /* to this V */\n opacity: 100% !important;\n}\n
\n\n\n\n" }, { "QuestionId": "76378347", "QuestionTitle": "How to generate a log file of the windows prompt when I run a bat file", "QuestionBody": "I'm running a bat file in windows. I'm trying to generate a log file of all the output that appears in the command prompt, to have as a document.\nNote, Not a log file of the contents of the bat file but of the command prompt that it outputs.\nHow would I do this? Thanks\n", "AnswerId": "76378974", "AnswerBody": "Redirecting output is done by using >, or appending to a file by using >>.\nFor batch files, we typically call them:\n(call script.cmd)>\"logfile.log\" 2>&1\n\nor append\n(call script.cmd)>>\"logfile.log\" 2>&1\n\nNote: 2>&1 redirects the stderr stream (2) to the stdout stream (1); it is important here, seeing as you said you want to log all of the output to the logfile.\nThat should also give the clue that you can in fact redirect success (stdout) results to one file and failures (stderr) to another, i.e.\n(call script.cmd) 1>\"Output.log\" 2>\"Errors.log\"\n\nNote, some commands and executables send everything to the stdout stream and nothing to stderr, for example ping.exe.\n" }, { "QuestionId": "76384255", "QuestionTitle": "calculate an object property based on the value of another property of the same object", "QuestionBody": "I need to find out the value of \"name\" inside the obj object. How can I find it without function invocation?\nI wanna use just obj.isActive not obj.isActive()\nlet obj = {\n name: \"X Æ A-12 Musk\",\n isActive: function () {\n return this.name.length > 4;\n },\n};\n\n// and after a while I need to check if is active:\n\nconsole.log(obj);\n\n// { \n// name: 'X Æ A-12 Musk',\n// isActive: [Function: isActive] <--------- NOT COOL ! 
\n// }\n\n\n\nIf I use an IIFE:\nlet obj = {\n name: \"X Æ A-12 Musk\",\n isActive: (function () {\n return this.name.length > 4;\n })(),\n};\n\nI get:\nreturn this.name.length > 4;\n ^\nTypeError: Cannot read properties of undefined (reading 'length') \n\n", "AnswerId": "76384399", "AnswerBody": "If you do not want to have to call isActive as a function, you can use a getter.\n\n\nconst obj = {\n name: \"X Æ A-12 Musk\",\n get isActive () {\n return this.name.length > 4;\n },\n};\n\nconsole.log(obj.isActive);\n\n\n\n" }, { "QuestionId": "76384220", "QuestionTitle": "Code to format JSON data and append hardcoded data to create a flat .txt file", "QuestionBody": "Source Data::\njson_data = [{\"studentid\": 1, \"name\": \"ABC\", \"subjects\": [\"Python\", \"Data Structures\"]},\n {\"studentid\": 2, \"name\": \"PQR\", \"subjects\": [\"Java\", \"Operating System\"]}]\n\nHardcoded_Val1 = 10\nHardcoded_Val2 = 20\nHardcoded_Val3 = str(datetime.datetime.now())\n\nNeed to create a flat .txt file with the below data.\nID,DEPT,\"studentid|name|subjects\",execution_dt\n10,20,\"1|ABC|Python,Data Structures\",2023-06-01\n10,20,\"2|PQR|Java,Operating System\",2023-06-01\n\nI am very new in python. Have already tried to figure it out to achieve it but couldn't.
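The getter in the answer above has a direct analogue in Python's property decorator; a quick sketch (the class and names are illustrative, not from the question):

```python
class Musk:
    def __init__(self, name: str):
        self.name = name

    @property
    def is_active(self) -> bool:
        # Recomputed on every attribute access, like the JS getter
        return len(self.name) > 4

obj = Musk("X Æ A-12 Musk")
print(obj.is_active)  # accessed without parentheses -> True
```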
Your help will be much appreciated.\nimport datetime\nimport pandas as pd\nimport json\n\n\njson_data = [{\"studentid\": 1, \"name\": \"ABC\", \"subjects\": [\"Python\", \"Data Structures\"]},\n {\"studentid\": 2, \"name\": \"PQR\", \"subjects\": [\"Java\", \"Operating System\"]}]\n\nHardcoded_Val1 = 10\nHardcoded_Val2 = 20\nHardcoded_Val3 = str(datetime.datetime.now())\n\nprofile = str(Hardcoded_Val1) + ',' + str(Hardcoded_Val2) + ',\"' + str(json_data) + '\",' + Hardcoded_Val3\n \nprint(profile)\n#data = json.dumps(profile, indent=True)\n#print(data)\ndata_list = []\nfor data_info in profile:\n data_list.append(data_info.replace(\", '\", '|'))\ndata_df = pd.DataFrame(data=data_list)\ndata_df.to_csv(r'E:\\DataLake\\api_fetched_sample_output.txt', sep='|', index=False, encoding='utf-8')\n\n\n", "AnswerId": "76384400", "AnswerBody": "I would bypass using pandas for this and just build the string manually primarily using a list comprehension and join().\nimport datetime\nimport csv\n\nHardcoded_Val1 = 10\nHardcoded_Val2 = 20\nHardcoded_Val3 = str(datetime.date.today())\njson_data = [\n {\"studentid\": 1, \"name\": \"ABC\", \"subjects\": [\"Python\", \"Data Structures\"]},\n {\"studentid\": 2, \"name\": \"PQR\", \"subjects\": [\"Java\", \"Operating System\"]}\n]\n\ncsv_data = []\nfor row in json_data:\n keys = \"|\".join(row.keys())\n values = \"|\".join([\n \",\".join(value) if isinstance(value, list) else str(value)\n for value in row.values()\n ])\n csv_data.append(dict([\n (\"ID\", Hardcoded_Val1),\n (\"DEPT\", Hardcoded_Val2),\n (keys, values),\n (\"execution_dt\", Hardcoded_Val3)\n ]))\n\nwith open(\"out.csv\", \"w\", encoding=\"utf-8\", newline=\"\") as file_out:\n writer = csv.DictWriter(file_out, fieldnames=list(csv_data[0].keys()))\n writer.writeheader()\n writer.writerows(csv_data)\n\nThis will produce a file with the following contents:\nID,DEPT,studentid|name|subjects,execution_dt\n10,20,\"1|ABC|Python,Data 
Structures\",2023-06-02\n10,20,\"2|PQR|Java,Operating System\",2023-06-02\n\n" }, { "QuestionId": "76380911", "QuestionTitle": "Expect function Parameter to be Key of Object with Dynamic Properties", "QuestionBody": "I'm making an application multi-language.\nI want to build the typing as strict and simple as possible. My code is the following:\n//=== Inside my Hook: ===//\ninterface ITranslation {\n [key:string]:[string, string]\n}\n\nconst useTranslator = (translations:ITranslation) => {\n const language = useLanguage() // just getting the language setting from another hook\n\n const translate = (key:keyof typeof translations) => {\n // mapping and returning the right translation\n }\n\n return translate;\n}\n\n\n//=== Inside the component: ===//\nconst translation:ITranslation = {\n \"something in english\": [ \"something in german\", \"something in spanish\" ],\n \"anotherthing in english\": [\"anotherthing in german\", \"anotherthing in spanish\"]\n}\n\nconst translate = useTranslation(translation)\n\nreturn(\n {translate(\"something in english\")}\n)\n\n\nWhat I want to achieve:\n\nWhen passing the translation object with dynamic keys to the hook useTranslation(translations), there should be a typecheck validating that both languages are provided (every property has an array with 2 strings).\n\nWhen using the translate function (inside the Text component), TypeScript should raise an error if a key does not match the dynamic keys inside the translations object. So this should throw an error: translate(\"not a key in object\")\n\n\nBut I can't get it to work properly. I can either set the translations object as const, but then there is no typecheck when passing the object to the hook.\nOr I set it as shown above with translation:ITranslation, but then there is no typechecking for the parameter of the translate function inside the component.\nIs it possible to achieve that?
(If yes, how?)\nThanks in advance!\n", "AnswerId": "76381092", "AnswerBody": "This solution will work only for TypeScript >= 4.9 since it uses the satisfies operator introduced in 4.9.\nAdding as const is the approach we will go with, and satisfies will allow us to type-check it.\nconst translation = {\n 'something in english': ['something in german', 'something in spanish'],\n 'anotherthing in english': ['anotherthing in german', 'anotherthing in spanish'],\n} as const satisfies ITranslation;\n\nSince we added as const the values in the ITranslation will be readonly [string, string], thus we have to update the ITranslation to the following:\ninterface ITranslation {\n [key: string]: readonly [string, string];\n}\n\nNext, we need to add a generic parameter to useTranslator so it works over the specific instance of ITranslation. The same goes for the translate function. It should accept a generic parameter for the key of ITranslation and return the value for that specific key:\nconst useTranslator = <T extends ITranslation>(translations: T) => {\n const language = useLanguage(); // just getting the language setting from another hook\n\n const translate = <K extends keyof T>(key: K): T[K][number] => {\n // return retrieved value\n };\n\n return translate;\n};\n\nSince it is not asked in the question, translate will return a union of the translations for the specific key, which is achieved by T[K][number]\nUsage:\nconst Component = () => {\n const translate = useTranslator(translation);\n \n // \"something in german\" | \"something in spanish\"\n const case1 = translate('something in english');\n\n // \"anotherthing in german\" | \"anotherthing in spanish\"\n const case2 = translate( 'anotherthing in english');\n\n return null;\n};\n\nplayground\n" }, { "QuestionId": "76381023", "QuestionTitle": "jquery above and below screen sizes", "QuestionBody": "I have added a script for showing a div before different divs in different screen size.
This is the code I used:\njQuery(function($){ \njQuery(document).ready(function(){\n jQuery(window).on('resize', function(){\n if(jQuery(window).width() <= 1024){\n jQuery( \".checkout.woocommerce-checkout .woocommerce-shipping-fields__wrapper\" ).insertBefore( \".checkout.woocommerce-checkout .flux-step.flux-step--2 .flux-checkout__shipping-table\" );\n }\n else if(jQuery(window).width() >= 1025){\n jQuery( \".checkout.woocommerce-checkout .woocommerce-shipping-fields__wrapper\" ).insertBefore( \".checkout.woocommerce-checkout .flux-checkout__content-right #order_review\" );\n }\n }); \n}); \n});\n\nBut the code is not working when I open the site. It only works if I resize the screen. Maybe that's because only the resize handler is used.\nCan anyone please guide me on how to make it so that the two conditions apply even without resizing the screen, one above 1024px and the other below 1024px.\nTIA\n", "AnswerId": "76381114", "AnswerBody": "Just put your code in a function and call it on the document ready:\n\n\n$(function(){\n \n resize();\n \n $(window).on('resize', resize);\n\n function resize(){\n $( \".checkout.woocommerce-checkout .woocommerce-shipping-fields__wrapper\" )\n .insertBefore(\n $(window).width() <= 1024 ? \n \".checkout.woocommerce-checkout .flux-step.flux-step--2 .flux-checkout__shipping-table\" : \n \".checkout.woocommerce-checkout .flux-checkout__content-right #order_review\"\n );\n }\n \n}); \n\n\n\n" }, { "QuestionId": "76378620", "QuestionTitle": "How is arbitrary distributed for Int? Why is it limited by so small values?", "QuestionBody": "I am trying to compare the QuickCheck library to the SmallCheck one. In SmallCheck I can reach a particular value by manipulating the depth parameter. In QuickCheck:\n>a<-generate (replicateM 10000 arbitrary) :: IO [Int]\n>length a\n10000\n>maximum a\n30\n\nand my question then is: why are 10,000 \"random\" (\"arbitrary\") integers limited to 30?!
I expected to see more \"widely\" distributed values within the range 0..10,000, maybe the maximum value close to 5,000.\n", "AnswerId": "76378997", "AnswerBody": "The documentation contains a clue:\n\nThe size passed to the generator is always 30\n\nBy default QuickCheck works by starting with 'easy' or 'small' inputs to see if it can find counterexamples with those. Only if it finds no problems with the small inputs does it gradually widen the range of generated input. The size value (which runs implicitly throughout everything that QuickCheck does) is the value that controls this behaviour.\nWhen you run QuickCheck (e.g. with quickCheck) it automatically increases the size as it goes.\nYou're not really supposed to use the generate function directly, but if you do, you can resize it:\nghci> b <- generate (replicateM 10000 (resize 60 arbitrary)) :: IO [Int]\nghci> maximum b\n60\n\nThat said, how are you supposed to use QuickCheck? The documentation describes quickCheck along with a multitude of variations you can use to evaluate properties.\nPersonally, I integrate my QuickCheck properties with a unit testing framework with testProperty. You can see examples here: Property-based testing is not the same as partition testing.\n" }, { "QuestionId": "76384387", "QuestionTitle": "How can I resolve the TypeScript error 'Function lacks ending return statement and return type does not include 'undefined'' in my code?", "QuestionBody": "I have the following simple function (make) that calls the handle function and is supposed to retry a number of times whenever that function throws. 
If the retries are exhausted, the make function should throw the error.\nconst handle = async (): Promise<string> => 'hi';\n\nconst make = async (): Promise<string> => {\n const MAX_RETRIES = 2;\n for (let idx = 0; idx <= MAX_RETRIES; idx++) {\n try {\n return await handle();\n } catch (err) {\n if (idx < MAX_RETRIES) {\n continue;\n } else {\n throw err;\n }\n }\n }\n};\n\nI'm using TypeScript, which is complaining because the return type doesn't include undefined:\n\nFunction lacks ending return statement and return type does not include 'undefined'.\n\nFor reference, this is the TS Playground for the code above.\nI'm looking for guidance on how to handle the return type for the function.\nNote that:\n\nI don't want to change my tsconfigs (currently set to strict)\nI don't want to modify the return type to Promise<string | undefined>\n\nMy understanding is that the make function can only either return a string (inside the try block) or throw an error once the retries have been exhausted. If that's the case then where does the undefined that TS is asking for come from? Am I missing something?\n", "AnswerId": "76384457", "AnswerBody": "\nMy understanding is that the make function can only either return a string (inside the try block) or throw an error once the retries have been exhausted.\n\nI'm fairly sure you're right, but TypeScript can't quite follow logic that complex, so it (incorrectly, I think) sees a path through the function that doesn't do an explicit return and so implicitly returns undefined (wrapped in a promise).\nYou can solve it in a few ways:\n\nAdd a return \"\"; at the end with a comment noting it'll never happen. (Blech.)\nAdd a throw new Error(\"Logic error, this will never be reached.\"); at the end.\nRewrite the function to make the final attempt more obviously a return-or-throw situation by using < instead of <= and then repeating the return await handle(); at the end. 
(Not great to have to repeat it, but it's very simple.)\n\nI don't think #1 or #2 need examples, but here's what #3 might look like:\nconst make = async (): Promise<string> => {\n const MAX_RETRIES = 2;\n for (let idx = 0; idx < MAX_RETRIES; idx++) {\n try {\n return await handle();\n } catch (err) {\n continue; // I guess technically we don't need this, since\n // the loop doesn't do anything else\n }\n }\n return await handle();\n};\n\nFor me, #2 is the winner (or jcalz's rewrite), but any of them will make TypeScript happy, it's really a style choice.\n" }, { "QuestionId": "76384356", "QuestionTitle": "How can i get all the metrics where two label have same values using promql?", "QuestionBody": "I am new to promql. So not sure if promql supports my requirement or not.\nmax_over_time(cbnode_systemstats_cpu_utilization_rate{instance=\"a\",node=\"a\"}[6h])\n\nThe above query gives me the max CPU utilization over the past 6 hours for a single instance a.\nHowever, I want a query which fetches the metrics for all instances where instance and node have the same value. 
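The bounded retry-or-rethrow pattern from the TypeScript answer above can also be sketched in Python, where no type-checker objection arises; `flaky` is a stand-in stub of mine, not from the question:

```python
MAX_RETRIES = 2

def make(handle):
    # Retry up to MAX_RETRIES times, then make one final attempt whose
    # return value or exception propagates (mirrors rewrite #3 above).
    for _ in range(MAX_RETRIES):
        try:
            return handle()
        except Exception:
            continue  # swallow and retry

    return handle()  # final attempt: return or raise

attempts = {"n": 0}
def flaky():
    # Fails twice, then succeeds
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("transient")
    return "hi"

print(make(flaky))  # -> hi (succeeds on the third attempt)
```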
Something similar to below:\nmax_over_time(cbnode_systemstats_cpu_utilization_rate{instance = node}[6h])\n\n", "AnswerId": "76384460", "AnswerBody": "There is no easy, elegant way to do that.\nBut you can utilize label_replace, the label-matching logic of binary operations, and a pinch of ingenuity.\nlabel_replace(cbnode_systemstats_cpu_utilization_rate{}, \"pseudoid\", \"$1\", \"instance\", \"(.*)\")\n == label_replace(cbnode_systemstats_cpu_utilization_rate{}, \"pseudoid\", \"$1\", \"node\", \"(.*)\")\n\nHere we add to the LHS metric a new label called pseudoid with the value of instance, and do the same for the RHS, but with the value of node.\nA result will be returned only if all labels are the same, which in turn means that instance == pseudoid == node.\nA demo of a similar query can be seen here.\nNotice that since this is not an instant vector selector, you'll need to use subquery syntax to pass it into max_over_time.\nYour resulting query should look like this:\nmax_over_time(\n (\n label_replace(cbnode_systemstats_cpu_utilization_rate{}, \"pseudoid\", \"$1\", \"instance\", \"(.*)\")\n == label_replace(cbnode_systemstats_cpu_utilization_rate{}, \"pseudoid\", \"$1\", \"node\", \"(.*)\")\n )[6h:]\n)\n\n" }, { "QuestionId": "76381015", "QuestionTitle": "How to Save data to the Database using Laravel 8?", "QuestionBody": "This is ItemManufactureController file\nclass ItemManufactureController extends Controller\n{\n public function index(){\n\n return view('item_manufacture');\n }\n // Save category data into the database\n public function store(Request $request){\n\n $newManufacture = new ItemManufacture;\n $newManufacture->name = $request->input('txtManufactureName');\n $newManufacture->status = $request->input('status', 'available');\n\n dd($newManufacture);\n\n $newManufacture->save();\n return redirect('/item_manufacture');\n\n }\n}\n\nThis is item_manufacture.blade.php file\n{{--this page add to layout --}}\n@extends('layout.layout_01')\n\n{{--identity the content form the
layout--}}\n@section('content')\n\n
[form markup stripped in extraction — the view renders an "Add Item Manufacture Details" panel with a text input named txtManufactureName and an "ADD" link]
\n\n@endsection\n\nThis is route file\n\n//save manufacture\nRoute::get('/save_manufacture', [ItemManufactureController::class, 'store'])->name('saveManufacture');\nRoute::get('/item_manufacture', function (){\n return view('pages.item_manufacture');\n});\n\nThis is Model file\nclass ItemManufacture extends Model\n{\n use HasFactory;\n// public $timestamps=false;\n protected $connection = 'mysql';\n protected $primaryKey = 'id';\n protected $table = 'item_manufacture';\n protected $fillable = [\n 'name',\n 'status'];\n}\n\nWhen I add data into the form and click the \"ADD\" button, the array comes back with null values.\nI used the Laravel 8 framework; when I add data into the input fields of the item_manufacture form, the data is not passed. If there is any error in my code, please correct it.\nHow do I save data and get values from the input fields using the Laravel framework?\n", "AnswerId": "76381115", "AnswerBody": "Please make your route a POST route since you're storing data, and change the name in the chained name() method to saveManufacture.store\nRoute::post('/save_manufacture', [ItemManufactureController::class, 'store'])->name('saveManufacture.store');\n\nAnd in your blade file, wrap your inputs inside a form tag and set the named route in its action.\nThen replace the a tag (anchor tag) with an input of type submit, since we have added the action in our form tag. So your blade file will look like this.\n{{--this page add to layout --}}\n@extends('layout.layout_01')\n\n{{--identity the content form the layout--}}\n@section('content')\n
[form markup stripped in extraction — the same "Add Item Manufacture Details" view, now with the inputs wrapped in a form posting to the saveManufacture.store route and the anchor replaced by a submit input]
\n@endsection\n\nNow you'll be able to get the request params in your store() function; to debug, try dd($request->post());\n" }, { "QuestionId": "76378362", "QuestionTitle": "Prevent webpack from auto-incrementing project version", "QuestionBody": "I am working with a chrome extension which uses webpack to build.\nTo build I use this : cross-env NODE_ENV=production yarn webpack -c webpack.config.js --mode production\nwebpack.config.js\nconst HTMLPlugin = require('html-webpack-plugin');\nconst CopyPlugin = require('copy-webpack-plugin');\nconst path = require('path');\nconst UglifyJSPlugin = require('uglifyjs-webpack-plugin');\nconst BrowserExtensionPlugin = require(\"extension-build-webpack-plugin\");\n\nmodule.exports = {\n entry: {\n options: './src/options.tsx',\n popup: './src/popup.tsx',\n content: './src/content.tsx',\n background: './src/background.tsx',\n },\n output: {\n filename: '[name].js',\n path: path.resolve(__dirname, 'build'),\n },\n resolve: {\n extensions: ['.js', '.jsx', '.ts', '.tsx', '.css'],\n modules: [path.resolve(__dirname, 'src'), 'node_modules'],\n alias: {\n react: 'preact/compat',\n 'react-dom': 'preact/compat',\n },\n },\n module: {\n rules: [\n {\n test: /\\.(tsx|jsx|ts|js)x?$/,\n exclude: /node_modules/,\n use: [\n {\n loader: 'babel-loader',\n options: {\n presets: [\n \"@babel/preset-env\",\n \"@babel/preset-react\",\n \"@babel/preset-typescript\",\n ],\n },\n },\n ],\n },\n {\n test: /\\.svg$/,\n use: ['@svgr/webpack'],\n },\n ],\n },\n plugins: [\n new HTMLPlugin({\n chunks: ['options'],\n filename: 'options.html',\n title: 'Options page title',\n }),\n new HTMLPlugin({\n chunks: ['popup'],\n filename: 'popup.html',\n }),\n new CopyPlugin([\n { from: './src/_locales/', to: './_locales' },\n { from: './src/assets', to: './assets' },\n { from: './src/manifest.json', to: './manifest.json' },\n ]),\n new BrowserExtensionPlugin({devMode: false, name: \"build/chromium.zip\", directory: \"src\", updateType: \"minor\"}),\n ],\n 
optimization: {\n minimizer: [\n new UglifyJSPlugin({\n uglifyOptions: {\n compress: {\n drop_console: true,\n drop_debugger: true,\n }\n }\n })\n ]\n },\n mode: 'production',\n stats: 'minimal',\n performance: {\n hints: false,\n maxEntrypointSize: 512000,\n maxAssetSize: 512000\n }\n};\n\n\nmanifest.json:\n{\n \"manifest_version\": 3,\n \"name\": \"__MSG_appName__\",\n \"description\": \"__MSG_appDesc__\",\n \"default_locale\": \"en\",\n \"version\": \"0.1.0\",\n ....\n ....\n}\n\nIf I run cross-env NODE_ENV=production yarn webpack -c webpack.config.js --mode production again, it increments the version from 0.1.0 to 0.2.0 automatically, not just in the build folder but in the src folder as well. How can I prevent this auto-increment behaviour?\nI suspect it's due to one of the webpack plugins I am using.\n", "AnswerId": "76378998", "AnswerBody": "This is caused by extension-build-webpack-plugin, which is easy to pin down since there are only 4 plugins in your config to look at.\nNo, it does not offer any method of avoiding version bumps. You can only configure whether it bumps the major or minor version number, defaulting to minor.\nIt's a really weird library to be using: it gets few downloads and is unmaintained. There are probably better alternatives out there.\n" }, { "QuestionId": "76384281", "QuestionTitle": "Javascript: run specific parts of strings through a function", "QuestionBody": "I want to parse some data that's in a string format. Anything enclosed in parentheses in the string to parse should be replaced with itself run through a function. 
This is what I want:\nfunction foo(str) {\n return parseInt(str) + 1; // Example function, not actually what the function will be\n}\n\nfunction parse(str) {\n // everything in str that is enclosed in parenthesis should be replaced with itself ran through foo();\n\n // Example\n // Input: \"My name is foo and I am (0) year old.\"\n // Output: \"My name is foo and I am 1 year old.\"\n // \"(0)\" has been replaced with the result of foo(\"0\")\n}\n\nI have thought up a couple bad workarounds, but I want something more robust. For example:\nfunction parse(str) {\n // Input: \"My name is foo and I am (0) year old.\"\n str = str.replaceAll(\"(\", \"${foo('\");\n str = str.replaceAll(\")\", \"')}\");\n str = \"`\" + str + \"`\"\n // Here str will be \"`My name is foo and I am ${foo(0)} year old.`\"\n // And I can use eval() or something to treat it like I've typed that\n}\n\nThis, however, is kind of a bad way of doing it.\nEDIT: I tested it, it works, but it is quite vulnerable.\nI can't think of anything else and I'm not very good with RegEx. (although I'd accept a solution using it)\n", "AnswerId": "76384463", "AnswerBody": "Here's what I would do. I would match the string with a RegEx that would match anything inside parenthesis in the string. 
With that, I would then use str.replaceAll() to replace the matched string with the result of the foo() function.\nconst regex = /\\((\\d*)\\)/gm;\n\nfunction foo(str) {\n return parseInt(str) + 1;\n}\n\nfunction parse(str) {\n \n // Loop over every match the regex finds in the string\n let m;\n while ((m = regex.exec(str)) !== null) {\n \n // This is necessary to avoid infinite loops with zero-width matches\n if (m.index === regex.lastIndex) {\n regex.lastIndex++;\n }\n \n // Replace all instances of the match with the result of applying foo to it\n str = str.replaceAll(m[0], foo(m[1]))\n }\n return str;\n}\n\nlet p = parse('My name is foo and I am (0) year old and I want (54) apples');\n\n// The result will be: My name is foo and I am 1 year old and I want 55 apples\n\nWith that, you won't need to use eval(), as it could potentially pose a risk for your application.\nI hope that works for you. If I missed anything, tell me and I will edit my answer.\n" }, { "QuestionId": "76381105", "QuestionTitle": "Find unique date from existing dataframe and make a new CSV with corresponding column values", "QuestionBody": "I have a time series which looks like this:\n\n\n\n\nTime\nVolume every minute\n\n\n\n\n2023-05-25T00:00:00Z\n284\n\n\n2023-05-25T00:01:00Z\n421\n\n\n.\n.\n\n\n.\n.\n\n\n2023-05-27T23:58:00Z\n894\n\n\n2023-05-27T23:59:00Z\n357\n\n\n\n\nI have to make a new CSV by iterating the Time column, finding unique dates and making new columns with the corresponding values of volume every minute. For example, desired output:\n\n\n\n\nDate\nmin1\nmin2\n...\nmin1440\n\n\n\n\n2023-05-25\n284\n421\n...\n578\n\n\n2023-05-26\n512\n645\n...\n114\n\n\n2023-05-27\n894\n357\n...\n765\n\n\n\n\nI am able to fetch unique dates but after that I am clueless. 
please find my sample code:\nimport pandas as pd\n\ntrain_data = pd.read_csv('date25to30.csv')\n\nprint(pd.to_datetime(train_data['time']).dt.date.unique())\n\n", "AnswerId": "76381147", "AnswerBody": "First, add the parameter parse_dates to read_csv to convert the Time column to datetimes:\ntrain_data = pd.read_csv('date25to30.csv', parse_dates=['Time'])\n\nThen create minutes by converting HH:MM:SS to timedeltas with to_timedelta and Series.dt.total_seconds, divide by 60 and add 1 because Python counts from 0:\nminutes = (pd.to_timedelta(train_data['Time'].dt.strftime('%H:%M:%S'))\n .dt.total_seconds()\n .div(60)\n .astype(int)\n .add(1))\n\nLast, pass to DataFrame.pivot_table with DataFrame.add_prefix:\ndf = (train_data.pivot_table(index=train_data['Time'].dt.date,\n columns=minutes,\n values='Volume',\n aggfunc='sum').add_prefix('min'))\nprint (df)\nTime min1 min2 min1439 min1440\nTime \n2023-05-25 284.0 421.0 NaN NaN\n2023-05-27 NaN NaN 894.0 357.0\n\n" }, { "QuestionId": "76378633", "QuestionTitle": "Cannot properly hide the appbar title on scroll in flutter", "QuestionBody": "I want to hide the AppBar on scroll. The search icon is hidden properly and also the opacity decreases on scroll. But for the title, it is not working.\nimport 'package:flutter/material.dart';\nimport 'package:vet_mobile/screens/chat.dart';\nimport 'package:vet_mobile/screens/logot.dart';\n\nclass HomeScreen extends StatelessWidget {\n const HomeScreen({Key? 
key}) : super(key: key);\n\n @override\n Widget build(BuildContext context) {\n return DefaultTabController(\n length: 3,\n child: Scaffold(\n body: NestedScrollView(\n headerSliverBuilder: (BuildContext context, bool innerBoxIsScrolled) {\n return [\n SliverAppBar(\n title: Row(\n mainAxisAlignment: MainAxisAlignment.spaceBetween,\n children: [\n Text(\n 'WhatsApp',\n style: TextStyle(\n color: Theme.of(context).textTheme.bodyLarge!.color,\n ),\n ),\n IconButton(\n onPressed: () {},\n icon: Icon(\n Icons.search,\n color: Theme.of(context).textTheme.bodyLarge!.color,\n ),\n ),\n ],\n ),\n pinned: true,\n floating: true,\n elevation: 5,\n bottom: TabBar(\n indicatorSize: TabBarIndicatorSize.tab,\n indicatorWeight: 4,\n indicatorColor: Theme.of(context).textTheme.bodyLarge!.color,\n labelStyle:\n TextStyle(fontSize: 13, fontWeight: FontWeight.w600),\n labelColor: Theme.of(context).textTheme.bodyLarge!.color,\n unselectedLabelColor:\n Theme.of(context).textTheme.bodySmall!.color,\n dividerColor: Colors.transparent,\n tabs: const [\n Tab(text: 'CHATS'),\n Tab(text: 'STATUS'),\n Tab(text: 'CALLS'),\n ],\n ),\n ),\n ];\n },\n body: const TabBarView(\n children: [\n Center(child: LogoutScreen()),\n Center(child: ChatScreen()),\n Center(child: Text('Patient')),\n ],\n ),\n ),\n ),\n );\n }\n}\n\n\n\nAs we can see the opacity of the search button decreases slowly as I scroll down but not for the title.\nI tried using the preferred height, animation controller, but it messed up more.\n", "AnswerId": "76379006", "AnswerBody": "Seems that this effect does not work when you set a custom style. 
Remove the fixed style setting from here:\nText(\n 'PawCare',\n // remove this\n /*style: TextStyle(\n color: Theme.of(context).textTheme.bodyLarge!.color,\n ),*/\n),\n\nTo set the style of the title text, use the titleTextStyle configuration of SliverAppBar:\nSliverAppBar(\n titleTextStyle: TextStyle(\n color: Theme.of(context).textTheme.bodyLarge!.color),\n...\n\n" }, { "QuestionId": "76378657", "QuestionTitle": "Haskell: cache result of a function in pattern matching", "QuestionBody": "I have the following algebraic data type:\ndata Tree a = Empty | Node a (Tree a) (Tree a)\n deriving (Show, Eq)\n\nAlso, I have\ndata Step = StepL | StepR\n deriving (Show, Eq)\n\nNow, I need a function search that takes\n\na root of the tree\na target value t\n... and it must return a path of type [Step] leading to a node with value t. Also, if t is not present in the tree, search must return Nothing. Finally, the input is guaranteed to have the target value at most once.\n\nMy best effort, as of now, is:\nsearchHelper :: Eq a => a -> Tree a -> [Step] -> Maybe [Step]\nsearchHelper _ Empty _ = Nothing\nsearchHelper targetValue (Node nodeValue leftChild rightChild) stepsSoFar = \n if targetValue == nodeValue then Just stepsSoFar \n else if searchHelper targetValue leftChild (stepsSoFar ++ [StepL]) /= Nothing then searchHelper targetValue leftChild (stepsSoFar ++ [StepL])\n else if searchHelper targetValue rightChild (stepsSoFar ++ [StepR]) /= Nothing then searchHelper targetValue rightChild (stepsSoFar ++ [StepR])\n else Nothing\n\nsearch :: Eq a => a -> Tree a -> Maybe [Step]\nsearch targetValue root = searchHelper targetValue root []\n\nAs you can see, I call the searchHelper too often (else if searchHelper targetValue leftChild (stepsSoFar ++ [StepL]) /= Nothing then searchHelper targetValue leftChild (stepsSoFar ++ [StepL])). I need a machinery that would allow me to cache the results of searchHelper calls and use them\nin if ... then ... 
else.\nQ: How can I do it?\n", "AnswerId": "76379177", "AnswerBody": "The use of the word cache confused me, but if I understand the question correctly, the real problem is the repeated use of the same expression. That could certainly become a readability and maintainability issue in a larger code base, so it is worthwhile addressing.\nFrom the context this looks like a 'toy problem'. There's nothing wrong with that - I play with plenty of those myself to learn new stuff. The reason I mention it, though, is that from this and other clues I gather that you're still a Haskell beginner. Again: nothing wrong with that, but it just means that I'm going to skip some of the slightly more advanced Haskell stuff.\nChecking for Nothing or Just like in the OP is rarely idiomatic Haskell. Instead you'd use pattern-matching or (more commonly) some of the higher-level APIs for working with Maybe (such as Functor, Applicative, Monad, etc.).\nThat said, I gather that this isn't quite what you need right now. In order to cut down on the duplication of expressions, you can use let..in syntax in Haskell:\nsearchHelper :: Eq a => a -> Tree a -> [Step] -> Maybe [Step]\nsearchHelper _ Empty _ = Nothing\nsearchHelper targetValue (Node nodeValue leftChild rightChild) stepsSoFar = \n if targetValue == nodeValue then Just stepsSoFar\n else\n let l = searchHelper targetValue leftChild (stepsSoFar ++ [StepL])\n in if l /= Nothing then l\n else\n let r = searchHelper targetValue rightChild (stepsSoFar ++ [StepR])\n in if r /= Nothing then r\n else Nothing\n\nThis enables you to 'declare' 'variables' l and r and reuse them.\nAs my lengthy preamble suggests, this still isn't idiomatic Haskell, but I hope it addresses the immediate question.\n" }, { "QuestionId": "76383893", "QuestionTitle": "Implement MultiKeyDict class in Python with alias() method for creating aliases. Existing code fails when original key is deleted. 
Need fix", "QuestionBody": "Python OOP problem\nMultiKeyDict class, which is almost identical to the dict class. Creating an instance of MultiKeyDict class should be similar to creating an instance of dict class:\nmultikeydict1 = MultiKeyDict(x=1, y=2, z=3)\nmultikeydict2 = MultiKeyDict([('x', 1), ('y', 2), ('z', 3)])\n\nprint(multikeydict1['x']) # 1\nprint(multikeydict2['z']) # 3\n\nA feature of the MultiKeyDict class should be the alias() method, which should allow aliases to be given to existing keys. The reference to the created alias should not differ from the reference to the original key, that is, the value has two keys (or more if there are several aliases) when the alias is created:\nmultikeydict = MultiKeyDict(x=100, y=[10, 20])\n\nmultikeydict.alias('x', 'z') # add key 'x' alias 'z'\nmultikeydict.alias('x', 't') # add alias 't' to key 'x'\nprint(multikeydict['z']) # 100\nmultikeydict['t'] += 1\nprint(multikeydict['x']) # 101\n\nmultikeydict.alias('y', 'z') # now 'z' becomes an alias of the key 'y'\nmultikeydict['z'] += [30]\nprint(multikeydict['y']) # [10, 20, 30]\n\nThe value must remain available by alias even if the original key was removed:\nmultikeydict = MultiKeyDict(x=100)\n\nmultikeydict.alias('x', 'z')\ndel multikeydict['x']\nprint(multikeydict['z']) # 100\n\nKeys must take precedence over aliases. 
If some key and alias are the same, then all operations when accessing them must be performed with the key:\nmultikeydict = MultiKeyDict(x=100, y=[10, 20])\n\nmultikeydict.alias('x', 'y')\nprint(multikeydict['y']) # [10, 20]\n\nI can't implement such a feature, please give me ideas how it can be done!!!\nmultikeydict = MultiKeyDict(x=100)\n\nmultikeydict.alias('x', 'z')\ndel multikeydict['x']\nprint(multikeydict['z']) # 100\n\nmy code does not work with this test\nmultikeydict = MultiKeyDict(x=100)\n\nmultikeydict.alias('x', 'z')\ndel multikeydict['x']\nprint(multikeydict['z']) #100 \n\nclass MultiKeyDict(dict):\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.aliases = {}\n\n def alias(self, key, alias):\n self.aliases[alias] = key\n\n def __getitem__(self, key):\n if key in self.aliases:\n key = self.aliases[key]\n return super().__getitem__(key)\n\n def __setitem__(self, key, value):\n if key in self.aliases:\n key = self.aliases[key]\n super().__setitem__(key, value)\n \n def __delitem__(self, key):\n if key in self.aliases:\n del self.aliases[key]\n super().__delitem__(key)\n\n\nmultikeydict = MultiKeyDict(x=100, y=[10, 20])\n\nmultikeydict.alias('x', 'z')\nmultikeydict.alias('x', 't')\nprint(multikeydict['z'])\nmultikeydict['t'] += 1\nprint(multikeydict['x'])\n\nmultikeydict.alias('y', 'z')\nmultikeydict['z'] += [30]\nprint(multikeydict['y'])\n\n", "AnswerId": "76384507", "AnswerBody": "Some remarks:\n\nAs the specification says that keys should have precedence over aliases (when both exist), you should first test key membership on self before looking in aliases. Your methods first check for membership in aliases...\n\nAs a value must continue to exist when a key is deleted for which there are still alias(es), I would suggest storing the values wrapped in a list (that just has that value as only member). This way you can reference that list in an alias entry. 
When the key is deleted, the alias will still have the reference to the list and can still act on it.\n\n\nHere is how that could look:\nclass MultiKeyDict(dict):\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.aliases = {}\n # wrap each value in a list of size 1:\n for key, value in self.items():\n super().__setitem__(key, [value])\n \n def alias(self, key, alias):\n self.aliases[alias] = super().__getitem__(key)\n\n def __getitem__(self, key):\n if key in self:\n return super().__getitem__(key)[0]\n return self.aliases[key][0]\n\n def __setitem__(self, key, value):\n if key in self:\n super().__getitem__(key)[0] = value\n elif key in self.aliases:\n self.aliases[key][0] = value\n else:\n super().__setitem__(key, [value])\n\n def __delitem__(self, key):\n if key in self:\n return super().__delitem__(key)\n del self.aliases[key]\n\n" }, { "QuestionId": "76381091", "QuestionTitle": "Narrow down literal unions based on previously used values", "QuestionBody": "The scenario is the following:\ntype Option = 'a' | 'b' | 'c' | 'd'\n\ntype Question = {\n message: string;\n options: Option[];\n default: Option // here's the issue\n}\n\nI want the default prop to be the one of the options used inside question.options. 
For example:\nconst q1: Question = {\n message: 'first question',\n options: ['a', 'b'],\n default: 'a'\n}\n\nconst q2: Question = {\n message: 'second question',\n options: ['c', 'd'],\n default: 'a' // I want this to give an error because 'a' is not in 'c' | 'd'\n}\n\nHow can I achieve this?\n", "AnswerId": "76381163", "AnswerBody": "It can be done just by using Question; however, it will be a complex type that will cause a horrible time for the compiler since it grows as a power of two, and if you have more options (more than 10), the compiler will reach its limits and won't compile.\nInstead, I would suggest adjusting Question to accept the Option[] as a generic parameter and assign the type of the elements of that generic parameter to default:\ntype Question<T extends Option[]> = {\n message: string;\n options: T;\n default: T[number];\n};\n\nLastly, we will need a generic function that would create a question for us:\nconst createQuestion = <T extends Option[]>(question: Question<T>) => question;\n\nUsage:\nconst q1 = createQuestion({\n message: \"first question\",\n options: [\"a\", \"b\"],\n default: \"a\",\n});\n\nconst q2 = createQuestion({\n message: \"second question\",\n options: [\"c\", \"d\"],\n default: \"a\", // Expected error\n});\n\nplayground\n" }, { "QuestionId": "76378693", "QuestionTitle": "How to create a transparent Material 3 NavigationBar in Flutter?", "QuestionBody": "I want to make my NavigationBar transparent. 
I have tried extendBody: true on Scafold with surfaceTintColor=Colors.transparent to the NavigationBar widget, but nothing changed.\n", "AnswerId": "76379413", "AnswerBody": "According to the document, SurfaceTintColor is the color of the surface tint overlay applied to the app bar's background color to indicate elevation.\nIf you want to make the AppBar transparent, just use the property backgroundColor instead.\nScaffold(\n extendBody: true,\n backgroundColor: Colors.white,\n appBar: AppBar(\n backgroundColor: Colors.transparent, // To make appBar transparent\n \n /// This is not necessary. You can play around \n /// to see surfaceTintColor when the AppBar is transaprent\n surfaceTintColor: Colors.redAccent,\n elevation: 3,\n title: Text(widget.title),\n ),\n),\n\nIt is also applied to NavigationBar\nbottomNavigationBar: NavigationBar(\n surfaceTintColor: Colors.amber, // not neccessary\n backgroundColor: Colors.transparent,\n destinations: [\n Icon(Icons.book, color: Colors.blue,),\n Icon(Icons.map, color: Colors.blue,),\n ],\n ),\n\n" }, { "QuestionId": "76378332", "QuestionTitle": "How to use tableone to change table percentage by row?", "QuestionBody": "I am use library(tableone) to make my descriptive statistics for multiple variables\nThis is my code:\nlibrary(tableone)\n\nmyVars <- c(\"class\", \"age\", \"Sex\", \"bmi\", \"bmi_category\",\n \"drink_freq\", \"smoke_yn\", \"edu_dummy\")\n\ncatVars <- c(\"class\", \"Sex\", \"bmi_category\",\n \"drink_freq\", \"smoke_yn\", \"edu_dummy\")\n\ntab1_inf <- CreateTableOne(vars = myVars, strata = \"NEWDI\",\n data = TKA_table1, factorVars = catVars)\n\na1 <- print(tab1_inf, exact = \"NEWDI\", showAllLevels = TRUE)\n\nThis it default for percentage, and I want change it format like this(example):\n\nI checked its description and found no options to set.\nhttps://rdrr.io/cran/tableone/man/print.TableOne.html\nHow can I do it?\n", "AnswerId": "76379520", "AnswerBody": "With some clever getting-your-hands dirty, you can 
manipulate the percentages in the TableOne object. This uses an example dataset called pbc from the survival package.\nlibrary(tableone)\nlibrary(survival)\ndata(pbc)\n\n## Make categorical variables factors\nvarsToFactor <- c(\"status\",\"trt\",\"ascites\",\"hepato\",\"spiders\",\"edema\",\"stage\")\npbc[varsToFactor] <- lapply(pbc[varsToFactor], factor)\n\n## Create a variable list\nvars <- c(\"time\",\"status\",\"age\",\"sex\",\"ascites\",\"hepato\",\n \"spiders\",\"edema\",\"bili\",\"chol\",\"albumin\",\n \"copper\",\"alk.phos\",\"ast\",\"trig\",\"platelet\",\n \"protime\",\"stage\")\n\n## Create Table 1 stratified by trt\ntableOne <- CreateTableOne(vars = vars, strata = c(\"trt\"), data = pbc)\n\ntableOne\n\nBefore\n Stratified by trt\n 1 2 p test\n n 158 154 \n time (mean (SD)) 2015.62 (1094.12) 1996.86 (1155.93) 0.883 \n status (%) 0.894 \n 0 83 (52.5) 85 (55.2) \n 1 10 ( 6.3) 9 ( 5.8) \n 2 65 (41.1) 60 (39.0) \n age (mean (SD)) 51.42 (11.01) 48.58 (9.96) 0.018 \n sex = f (%) 137 (86.7) 139 (90.3) 0.421 \n ascites = 1 (%) 14 ( 8.9) 10 ( 6.5) 0.567 \n hepato = 1 (%) 73 (46.2) 87 (56.5) 0.088 \n spiders = 1 (%) 45 (28.5) 45 (29.2) 0.985 \n...\n\nYou should try to adapt the following code for your own data format:\nfor (i in 1:length(tableOne$CatTable[[1]])) {\n sum = tableOne$CatTable[[1]][[i]]$freq + tableOne$CatTable[[2]][[i]]$freq\n tableOne$CatTable[[1]][[i]]$percent = tableOne$CatTable[[1]][[i]]$freq / sum\n tableOne$CatTable[[2]][[i]]$percent = tableOne$CatTable[[2]][[i]]$freq / sum\n}\n\ntableOne\n\nAfter\n Stratified by trt\n 1 2 p test\n n 158 154 \n time (mean (SD)) 2015.62 (1094.12) 1996.86 (1155.93) 0.883 \n status (%) 0.894 \n 0 83 (0.5) 85 (0.5) \n 1 10 (0.5) 9 (0.5) \n 2 65 (0.5) 60 (0.5) \n age (mean (SD)) 51.42 (11.01) 48.58 (9.96) 0.018 \n sex = f (%) 137 (0.5) 139 (0.5) 0.421 \n ascites = 1 (%) 14 (0.6) 10 (0.4) 0.567 \n hepato = 1 (%) 73 (0.5) 87 (0.5) 0.088 \n spiders = 1 (%) 45 (0.5) 45 (0.5) 0.985 \n\n" }, { "QuestionId": "76384509", "QuestionTitle": 
"Altair: showing the value of the current point in the tooltip", "QuestionBody": "In the code below, we have a dataset that can be read as: \"two cooks cook1, cook2 are doing a competition. They have to make four dishes, each time with two given ingredients ingredient1, ingredient2. A jury has scored the dishes and the grades are stored in _score.\nI want to use Altair to show a graph where the x-axis is each dish (1, 2, 3, 4) and the y-axis contains the scores of the two cooks separately. This currently works but the main issue is that on hover, the tooltip does not include the score of the current point that is being hovered.\nimport altair as alt\nimport pandas as pd\n\n\ndf = pd.DataFrame({\n \"ingredient1\": [\"potato\", \"onion\", \"carrot\", \"beet\"],\n \"ingredient2\": [\"tomato\", \"pepper\", \"zucchini\", \"lettuce\"],\n \"dish\": [1, 2, 3, 4],\n \"cook1\": [\"cook1 dish1\", \"cook1 dish2\", \"cook1 dish3\", \"cook1 dish4\"],\n \"cook1_score\": [0.4, 0.3, 0.7, 0.9],\n \"cook2\": [\"cook2 dish1\", \"cook2 dish2\", \"cook2 dish3\", \"cook2 dish4\"],\n \"cook2_score\": [0.6, 0.2, 0.5, 0.6],\n})\n\n\nvalue_vars = [c for c in df.columns if c.endswith(\"_score\")]\ncook_names = [c.replace(\"_score\", \"\") for c in value_vars]\nid_vars = [\"dish\", \"ingredient1\", \"ingredient2\",] + cook_names\ndf_melt = df.melt(id_vars=id_vars, value_vars=value_vars,\n var_name=\"cook\", value_name=\"score\")\n\nchart = alt.Chart(df_melt).mark_circle().encode(\n x=alt.X(\"dish:O\", title=\"Dish number\"),\n y=alt.Y(\"score:Q\", title=\"Score\"),\n color=\"cook:N\",\n tooltip=id_vars\n)\n\nchart.show()\n\n\nI tried explicitly adding the score columns to the tooltip:\n tooltip=id_vars+value_vars\n\nBut that yields the following error:\n\nValueError: cook1_score encoding field is specified without a type; the type cannot be inferred because it does not match any column in the data.\n\nSo how can I get altair to also show the score of (only) the currently hovered element?\n", 
"AnswerId": "76384598", "AnswerBody": "cook1_score is not a column in df_melt, which is why you see the error. Setting tooltip=id_vars+['score'] will work.\n" }, { "QuestionId": "76384490", "QuestionTitle": "Flutter: Inconsistent column padding on Buttons between Android and Windows", "QuestionBody": "I have created a simple material app in flutter with:\nflutter create --platforms=android,windows columntest\nWhen I run the program on Android and Windows, I get some kind of padding between the ElevatedButtons on Android, but not on Windows. Do you know where this comes from and how I can make the design consistent?\nThe behavior seems to occur only with buttons (TextButton, OutlinedButton, ElevatedButton).\nI have also tested this with container (with border), there it does not occur.\nHere the code from the small app:\nimport 'package:flutter/material.dart';\n\nvoid main() {\n runApp(const MyApp());\n}\n\nclass MyApp extends StatelessWidget {\n const MyApp({super.key});\n\n @override\n Widget build(BuildContext context) {\n return MaterialApp(\n title: 'Flutter Demo',\n home: Scaffold(\n body: Center(\n child: Column(\n crossAxisAlignment: CrossAxisAlignment.center,\n mainAxisAlignment: MainAxisAlignment.center,\n children: [\n ElevatedButton(child: const Text(\"Foobar1\"), onPressed: () {}),\n ElevatedButton(child: const Text(\"Foobar2\"), onPressed: () {}),\n ],\n ),\n ),\n ),\n );\n }\n}\n\n\nHere is a screenshot at runtime:\n\nHere my flutter version:\n$ flutter --version\nFlutter 3.10.0 • channel stable • https://github.com/flutter/flutter.git\nFramework • revision 84a1e904f4 (3 weeks ago) • 2023-05-09 07:41:44 -0700\nEngine • revision d44b5a94c9\nTools • Dart 3.0.0 • DevTools 2.23.1\n\nMy Android Emulator is an: Pixel_3a_API_33_x86_64\nBut the behaviour also occurs on my physical Pixel 6 (with android UpsideDownCake)\nI look forward to your responses.\nbest regards\nMichael\n", "AnswerId": "76384624", "AnswerBody": "So, this implementation is done by 
flutter.\nThis behaviour is because of the ThemeData.materialTapTargetSize parameter for the MaterialApp.\nThis feature decides what the touchable dimensions of a Material button should be, in your case ElevatedButton.\nYou have 2 potential solutions:\n\nChange the padding on the ElevatedButton like below\n\n ElevatedButton(\n onPressed: () {},\n style: const ButtonStyle(padding: MaterialStatePropertyAll(EdgeInsets.zero)),\n child: const Icon(Icons.abc),\n ),\n\n\nChange the value in the MaterialApp\n\n MaterialApp(\n title: 'Flutter Demo',\n theme: ThemeData(\n primarySwatch: Colors.blue,\n materialTapTargetSize: MaterialTapTargetSize.shrinkWrap),\n home: CupertinoPickerExample(),\n )\n\nReference: https://stackoverflow.com/a/67580951\n" }, { "QuestionId": "76378581", "QuestionTitle": "Perl Mojolicious: Passing arguments to a code ref", "QuestionBody": "In my Mojolicious Controller, I have:\nmy @promise;\nforeach my $code (\\&doit1, \\&doit2,) {\n my $prom = Mojo::Promise->new;\n Mojo::IOLoop->subprocess(\n sub {\n my $r = $code->(\"Hello\");\n return $r;\n },\n sub {\n my ($subprocess, $err, @res) = @_;\n return $prom->reject($err) if $err;\n $prom->resolve(@res);\n },\n );\n push @promise, $prom;\n}\n\nMojo::Promise\n ->all(@promise)\n ->then(\n sub {\n my ($result1, $result2) = map {$_->[0]} @_;\n });\n\nThis works, and I can pass arguments (e.g. Hello) to my sub.\nNow I converted doti1() and doit2() as helpers. So the code looks like:\nforeach my $code (sub {$self->myhelper->doit1(\"Goodbye\")},\n sub {$self->myhelper->doit2(\"Good night\")},\n ) {\n my $prom = Mojo::Promise->new;\n Mojo::IOLoop->subprocess(\n sub {\n my $r = $code->(\"Hello\"); # this is ignored?\n return $r;\n },\n sub {\n my ($subprocess, $err, @res) = @_;\n return $prom->reject($err) if $err;\n $prom->resolve(@res);\n },\n );\n push @promise, $prom;\n}\n\nHow can I continue to pass the same set of arguments inside the loop (e.g. Hello), without having to specify them in each code ref (i.e. 
avoid Goodbye & Good night)? I like the idea of passing the same arguments for each code ref: $code->(\"Hello\")\n", "AnswerId": "76379917", "AnswerBody": "\nNow I converted doti1() and doit2() as helpers. So the code looks like:\n\nforeach my $code (sub {$self->myhelper->doit1(\"Goodbye\")},\n sub {$self->myhelper->doit2(\"Good night\")},\n ) {\n #....\n}\n\nYes, but you are calling the helpers from another anonymous sub,\n\nHow can I continue to pass the same set of arguments inside the loop (e.g. Hello), without having to specify them in each code ref\n\nso to recover the argument and pass it on to the helper, you just do:\nforeach my $code (sub {my $arg = shift; $self->myhelper->doit1($arg)},\n sub {my $arg = shift; $self->myhelper->doit2($arg)},\n) {...}\n\nor more generally as @Dada pointed out in the comments:\nforeach my $code (sub {$self->myhelper->doit1(@_)},\n sub {$self->myhelper->doit2(@_)},\n) {...}\n\n" }, { "QuestionId": "76378589", "QuestionTitle": "How can I parse the certificate information output from the security command in Mac?", "QuestionBody": "I need to retrieve the attributes of a certificate that is stored in the keychain on my Mac from the command line. I can collect them manually from the Keychain Access app, but I want to do that with a script.\n\nI used the security command to get a certificate and \"grep\" to inspect the \"subject\" section:\nsecurity find-certificate -c \"Apple Development\" login.keychain | grep \"subj\"\n\nand then got the following output (some omitted by \"...\").\n\"subj\"=0x3081943...553 \"0\\201\\2241\\0320\\03...02US\"\n\nIn the output above, what format is the data following \"subj\"= and how can I parse it? I found that decoding the first half of the hexadecimal sequence(0x30...) with UTF-8 yields the second half of the string (0\\201...), but I don't know what 0\\201\\2241\\... means. 
I have tried other character codes, but they just give me garbled characters.\n", "AnswerId": "76381943", "AnswerBody": "As for the format, the certificates are stored in DER/PEM format, which is a representation of ASN.1 encoded data. What you see in the output is the hexadecimal representation of the ASN.1 binary data. The blob indicates that the value or attribute is stored as binary data.\nAs for exporting (for certificates), I would highly recommend combining security with openssl as follows:\nsecurity find-certificate -p -c \"Apple Development\" login.keychain | openssl x509 -noout -subject\n\nThe -p option in the security command exports the found certificate in PEM format, which is something openssl can use. You can then pipe the PEM data into the openssl command, where one can easily extract the subject using the -subject option.\nYou can check out both the man page of security and the man page of openssl x509.\n" }, { "QuestionId": "76384082", "QuestionTitle": "error messages fitting a non-linear exponential model between two variables", "QuestionBody": "I have two variables that I'm trying to model the relationship between and extract the residuals. The relationship between the two variables is clearly a non-linear exponential relationship. 
I've tried a few different approaches with nls, but I keep getting different error messages.\n\n# dataset\ndf <- structure(list(y = c(464208.56, 334962.43, 361295.68, 426535.68, 258843.93, 272855.46, \n 166322.72, 244695.28, 227003.03, 190728.4, 156025.45, 72594.24, 56911.4, 175328.95, 161199.76, \n 152520.77, 190610.57, 60734.34, 31620.9, 74518.86, 45524.49, 2950.58, 2986.38, 15961.77, 12484.05, \n 6828.41, 2511.72, 1656.12, 5271.4, 7550.66, 3357.71, 3620.43, 3699.85, 3337.56, 4106.55, 3526.66, \n 2996.79, 1649.89, 4561.64, 1724.25, 3877.2, 4426.69, 8557.61, 6021.61, 6074.17, 4072.77, 4032.95, \n 5280.16, 7127.22), \n x = c(39.23, 38.89, 38.63, 38.44, 38.32, 38.27, 38.3, 38.4, 38.56, 38.79, 39.06, 39.36, 39.68, \n 40.01, 40.34, 40.68, 41.05, 41.46, 41.93, 42.48, 43.14, 43.92, 44.84, 45.9, 47.1, 48.4, 49.78, \n 51.2, 52.62, 54.01, 55.31, 56.52, 57.6, 58.54, 59.33, 59.98, 60.46, 60.78, 60.94, 60.92, 60.71, \n 60.3, 59.69, 58.87, 57.86, 56.67, 55.33, 53.87, 52.33)), \n row.names = c(NA, -49L), \n class = c(\"tbl_df\", \"tbl\", \"data.frame\"), \n na.action = structure(c(`1` = 1L, `51` = 51L), \n class = \"omit\"))\n\n# initial model\nm <- nls(y ~ a * exp(r * x), \n start = list(a = 0.5, r = -0.2), \n data = df)\nError in nls(y ~ a * exp(r * x), start = list(a = 0.5, r = -0.2), data = df, : singular gradient\n\n# add term for alg\nm <- nls(y ~ a * exp(r * x), \n start = list(a = 0.5, r = -0.2), \n data = df,\n alg = \"plinear\")\nError in nls(y ~ a * exp(r * x), start = list(a = 0.5, r = -0.2), data = df, : \n step factor 0.000488281 reduced below 'minFactor' of 0.000976562\n\n", "AnswerId": "76384628", "AnswerBody": "log-Gaussian GLM\nAs @Gregor Thomas suggests you could linearize your problem (fit a log-linear regression), at the cost of changing the error model. (Basic model diagnostics, i.e. a scale-location plot, suggest that this would be a much better statistical model!) 
However, you can do this efficiently without changing the error structure by fitting a log-link Gaussian GLM:\nm1 <- glm(y ~ x, family = gaussian(link = \"log\"), data = df)\n\nThe model is y ~ Normal(exp(b0 + b1*x), s), so a = exp(b0), r = b1.\nI tried using list(a=exp(coef(m1)[1]), r=coef(m1)[2]) as starting values, but even this was too finicky for nls().\nThere are two ways to get nls to work.\nshifted exponential\nAs @GregorThomas suggests, shifting the x-axis to x=38 also works fine (given a sensible starting value):\nm <- nls(y ~ a * exp(r * (x-38)), \n start = list(a = 3e5, r = -0.35), \n data = df)\n\nprovide nls with a gradient\nThe deriv function will generate a function with the right structure for nls (returns the objective function, with a \".grad\" attribute giving a vector of derivatives) if you ask it nicely. (I'm also using the exponentiated intercept from the log-Gaussian GLM as a starting value ...)\nf <- deriv( ~ a*exp(r*x), c(\"a\", \"r\"), function.arg = c(\"x\", \"a\", \"r\"))\nm2 <- nls(y ~ f(x, a, r),\n start = list(a = exp(coef(m1)[1]), r = -0.35),\n data = df)\n\nWe can plot these to compare the predictions (visually identical):\npar(las = 1, bty = \"l\")\nxvec <- seq(38, 60, length = 101)\nplot(y ~ x, df)\nlines(xvec, predict(m1, newdata = data.frame(x=xvec), type = \"response\"),\n col = 2)\nlines(xvec, predict(m, newdata = data.frame(x=xvec)), col = 4, lty = 2)\nlines(xvec, predict(m2, newdata = data.frame(x=xvec)), col = 5, lty = 2)\n\n\nWith a little bit of extra work (exponentiating the intercept for the Gaussian GLM, shifting the x-origin back to zero for the nls fit) we can compare the coefficients (only equal up to a tolerance of 2e-4 but that should be good enough, right?)\na1 <- exp(coef(m1)[[1]])\na2 <- coef(m)[[1]]*exp(-38*coef(m)[[2]])\nall.equal(c(a = a1, r = coef(m)[[2]]),\n c(a = a2, r = coef(m1)[[2]]), tolerance = 1e-4)\nall.equal(c(a = a1, r = coef(m)[[2]]),\n coef(m2), tolerance = 2e-4)\n\n" }, { "QuestionId": 
"76382271", "QuestionTitle": "Function call as a parameter inside insert values statement", "QuestionBody": "I'm trying to insert the data inside a forall loop. For this case, I cannot use a temporary variable and set result of the function beforehand.\nThe function just maps a number to a string:\ncreate or replace function GetInvoiceStatus(status number)\n return nvarchar2\nas\nbegin\n case status\n when 0 then return 'New';\n when 200 then return 'Sent';\n when 300 then return 'Accepted';\n end case;\n\n return '';\nend; \n\nwhen I call this function like:\nselect GetInvoiceStatus(200) from dual;\n\nI get the appropriate result.\nHowever, when I try to insert the data I get errors.\nThe forall insert:\nforall i in 1.. INVOICE_DATA.COUNT\ninsert into \"InvoiceAudit\"\n(\"PropertyName\", \"OldValue\", \"NewValue\" (\n VALUES ('Status', (GetInvoiceStatus(invoice_data(i).status)),\n ((GetInvoiceStatus((select \"Status\" from \"Invoice\" where \"InvoiceId\" = invoice_data(i).invoiceId)))));\n\nHowever, I get the following error:\n\n[2023-06-01 15:02:57] [65000][6592] [2023-06-01 15:02:57] ORA-06592:\nCASE not found while executing CASE statement [2023-06-01 15:02:57]\nORA-06512: at \"PUBLIC.GETINVOICESTATUS\", line 9 [2023-06-01 15:02:57]\nORA-06512: at \"PUBLIC.INVOICESSP\", line 63 [2023-06-01 15:02:57]\nPosition: 5\n\nI have double checked, and the results from invoice_data(i).Status and the other select value are both valid parameters (and have their cases covered) and return appropriate string when called outside the stored procedure.\nIs the syntax somewhere wrong?\nI would like to remain using forall if at all possible because it is much faster than a regular for loop.\n", "AnswerId": "76382378", "AnswerBody": "This error means that the parameter value (status) is not one of the cases in the case expression (which are 0, 200, 300).\nIf you executed this code select GetInvoiceStatus(555) as dd from dual you will get the same error. 
So, add an ELSE clause like this:\ncreate or replace function GetInvoiceStatus(status number)\n return nvarchar2\nas\nbegin\n case status\n when 0 then return 'New';\n when 200 then return 'Sent';\n when 300 then return 'Accepted';\n else return '';\n end case;\nend; \n\n" }, { "QuestionId": "76384531", "QuestionTitle": "pivot returning blank instead of 0 google sheet", "QuestionBody": "I have a spreadsheet with an importrange and vlookup to another file, where it's looking up a pivot table. Some data is blank in the pivot table, and when I look it up with the formula I get a blank result, even though I have set it to return 0 via iferror.\nHere's my formula:\n=iferror(VLOOKUP(A5,importrange(\"12PaJfEC7Q7gOcCx2zlMHG3YybQuk1TSsNjZDw26qFRg\",\"Converted Pivot!A:E\"),3,false),0)\n", "AnswerId": "76384635", "AnswerBody": "You may try:\n=let(Σ,ifna(vlookup(A5,importrange(\"12PaJfEC7Q7gOcCx2zlMHG3YybQuk1TSsNjZDw26qFRg\",\"Converted Pivot!A:E\"),3,),\"no_match_found\"),\n if(Σ=\"\",0,Σ))\n\n\nA blank value will now be shown as 0, and a non-matching lookup will be flagged with no_match_found. (iferror alone cannot help here, because vlookup returning a blank is not an error.)\n\n" }, { "QuestionId": "76380577", "QuestionTitle": "Make an element not scroll horizontally", "QuestionBody": "I am trying to make a layout with:\n\nA header (gray block in the snippet)\nA body (lime border)\nMain body content (blocks with red border)\n\nIf you scroll horizontally, then the header should not scroll; it should be full width and stay in view. If you scroll vertically, then the header should scroll off the page as usual. The height of the header is dynamic, and fits the content within it (this SO answer works with a fixed height).\nThe
main element is allowed to be wider than the viewport, but the header is always the viewport width.\nThe reason I don't add max-width: 100%; overflow-x: auto on the main
element (like this SO answer), is because then the horizontal scroll bar appears at the bottom of the element; if, say, one is reading the first block and wishes to scroll horizontally, one has to scroll to the bottom of the main element to reach the horizontal scroll bar, scroll to the side, then scroll back up. I wish to have the horizontal scroll bar always present if main is wider than the viewport.\nI have tried position: sticky/fixed on the header but could not get it to work.\nI would prefer not to use JavaScript if possible.\n\n\nheader {\n padding: 32px;\n background: gray;\n width: 100%;\n}\nmain {\n border: 2px solid lime;\n min-width: 100%;\n}\ndiv {\n height: 200px;\n width: 120%; /* make it overflow horizontally */\n display: flex;\n align-items: center;\n justify-content: center;\n border: 2px solid red;\n}\n
<header>\n The Header should not scroll horizontally<br>\n (is dynamic height)\n</header>\n<main>\n <div>content 1</div>\n <div>content 2</div>\n <div>content 3</div>\n <div>content 4</div>\n <div>content 5</div>\n <div>content 6</div>\n</main>\n
\n\n\n\n", "AnswerId": "76381169", "AnswerBody": "What I have done here is make the header sticky to the left edge of the screen. Its parent element must be aware of the size of your content to allow the header to move, so I set the body's min-width to min-content, and the same on main so it can pass its children's size up to the body.\nYou may also notice I used box-sizing: border-box on the header; that is so the padding is taken into account when the element's size (100vw in this case) is calculated. You don't want to use % for the header width because it won't have room to slide.\nAlso, the div sizes must not depend on the parent's size, so you can't use % there either.\n\n\nbody{\n min-width: min-content;\n}\n\nheader {\n box-sizing: border-box;\n position: sticky;\n left: 0;\n padding: 32px;\n background: gray;\n width: 100vw;\n}\nmain {\n min-width: min-content;\n border: 2px solid lime;\n}\ndiv {\n height: 200px;\n width: 120vw; /* make it overflow horizontally */\n display: flex;\n align-items: center;\n justify-content: center;\n border: 2px solid red;\n}\n\n\n
<header>\n The Header should not scroll horizontally<br>\n (is dynamic height)\n</header>\n<main>\n <div>content 1</div>\n <div>content 2</div>\n <div>content 3</div>\n <div>content 4</div>\n <div>content 5</div>\n <div>content 6</div>\n</main>\n
\n\n\n\n\n" }, { "QuestionId": "76382239", "QuestionTitle": "\"Unused CSS selector\" when using a SASS themify mixin with Svelte and Vite:", "QuestionBody": "I'm trying to create a small web application using Svelte.\nOne of the requirements is to be able to change the application \"theme\" on demand, for example - dark theme, light theme, high contrast, and so on.\nI've been using an online mixin snippet to help me with that -\nhttps://medium.com/@dmitriy.borodiy/easy-color-theming-with-scss-bc38fd5734d1\nHowever, this doesn't work consistently, and I often get errors like:\n[vite-plugin-svelte] /path/to/svelte/component.svelte:61:0 Unused CSS selector \"main.default-theme div.some.element.identification\"\neven though the selector is used and is receiving its non-themed attributes.\nInside a themes.scss file:\n@mixin themify($themes) {\n\n @each $theme,\n $map in $themes {\n main.#{$theme}-theme & {\n $theme-map: () !global;\n\n @each $key,\n $submap in $map {\n $value: map-get(map-get($themes, $theme), '#{$key}');\n $theme-map: map-merge($theme-map, ($key: $value)) !global;\n }\n\n @content;\n $theme-map: null !global;\n }\n }\n}\n\n@function themed($key) {\n @return map-get($theme-map, $key);\n}\n\n$themes: (\n default: (\n strokeColor: green,\n fillColor: red,\n ),\n);\n\nand inside another scss file that is importing themes.scss:\ndiv.some.element.identification {\n some-non-themed-attribute: some-value;\n\n @include themify($themes) {\n stroke: themed('strokeColor');\n fill: themed('fillColor');\n }\n}\n\nNow the punchline - when using this methodology, some elements are receiving their appropriate themed attributes, and others don't.\nI am also seeing the following error:\n[vite-plugin-svelte] /path/to/svelte/component.svelte:61:0 Unused CSS selector \"main.default-theme div.some.element.identification\"\nThe issue doesn't seem to be in the css selectors - since the elements that don't receive the themed attributes still receive the other non-themed attributes
in the same css clause.\nTwo final observations -\n\nWhen I'm building the project (using vite build), I can see that the css asset file being created doesn't include the css selectors that are missing their themed attributes.\nWhen I'm using the devtools to locate the supposedly unused selectors (whose themed attributes are not present), they can be found - despite the error message.\n\nI've been trying different ways to solve this issue and nothing works consistently.\nThank you in advance for your help!\n", "AnswerId": "76382400", "AnswerBody": "You could try checking these different items:\n\nIf you use svelte-preprocess, try to add scss: { prependData: `@import 'src/styles/theme.scss';` } or whatever the path to your theme is, to the config object.\nIf it still does not work, maybe try to swap svelte-preprocess with vite-preprocess.\nDisable any potential css purge plugin.\n\n" }, { "QuestionId": "76384567", "QuestionTitle": "Is it faster to use push_back(x) or using an index (capacity)?", "QuestionBody": "I learned 2 ways of inserting elements into a vector.\nAnd I've been wondering which way is faster since I'm working with time limits.\nMethod 1:\nint n;\ncin>>n;\nvector<int> v(n);\nfor(int i = 0;i<n;i++){\n cin>>v[i];\n}\n\nMethod 2:\nint n;\ncin>>n;\nvector<int> v;\nfor(int i = 0;i<n;i++){\n int x;\n cin>>x;\n v.push_back(x);\n}\n\nIf you have a better method to suggest, it'd be appreciated!\n", "AnswerId": "76384661", "AnswerBody": "Both have issues:\nYou should be using reserve(n)\nint n;\ncin >> n;\nvector<int> v;\nv.reserve(n);\nfor(int i = 0; i < n; ++i){\n int x;\n cin >> x;\n v.emplace_back(x);\n}\n\nIn the first version: Setting size.\nHere you have the issue that you are constructing all the elements in the array. Now for integers this may be insignificant.
But if we extend this to non-integer types, a constructor needs to be called for each element, and then you use the assignment operator to copy over those elements.\nThe second option: push_back\nHere you run the risk of the underlying storage being reallocated (potentially multiple times). Each time you reallocate, you need to copy the data from the old storage to the new storage.\nAgain, this hurts for integers but really hurts for types with constructors and destructors.\nPrefer: emplace_back()\nRather than push_back, which needs a fully constructed object, you can use emplace_back and pass in the arguments used to construct the object. This allows the vector to construct the object in place. If you have simple integers or classes with efficient move semantics this is not an issue, but it is worth it as a general habit.\n" }, { "QuestionId": "76382402", "QuestionTitle": "Background video in Node.js 13", "QuestionBody": "I am trying to set up a GIF as a background, but it does not work:\nIn the code I import GridMatrix and extract the src from it, then I use the video tag to try to render it fullscreen.\nimport React from 'react';\nimport GridMatrix from '../assets/gridMatrix.gif';\n\nfunction Home() {\n\n return (
<div>\n <video src={GridMatrix} />\n <h1>UNS Demo</h1>\n</div>
\n );\n}\n\nexport default Home;\n\n\n", "AnswerId": "76382476", "AnswerBody": "GIF files are not video files and the MIME type for them is image/gif. The